Rounding Error Worse in Binary or Decimal Representation?
01-28-2015, 02:26 PM (This post was last modified: 01-28-2015 03:00 PM by Claudio L..)
(01-28-2015 01:51 PM)Gerald H Wrote:  This thread

raises the question of reducing rounding errors by using binary instead of decimal representation or vice versa.

My view is that as the number of primes included in the factorization of the number base increases, rounding errors will be reduced or avoided.

Accordingly, 2*5 as base should perform better than 2.

Both would be majorized by 2*3*5, & so on.

Or am I way off course?

I'd say you are definitely on the right track.
The more primes in the base, the more numbers can be represented exactly, so rounding is eliminated entirely in some operations.
When rounding does happen, though, I'd say it's the same for all bases:
In any base, you round based on whether the digits to be discarded amount to more or less than half the weight of the last digit you keep. So the error depends only on the magnitude of that last digit.
However, binary is a more "compact" form of storage, so you can fit more digits in the same amount of memory. This is why many people claim binary is better for rounding: not because of rounding itself, but because at equal amounts of storage, your last digit has a smaller magnitude, and so does your rounding error.
If you consider integer numbers, for example, rounding in binary or BCD would have the exact same error: +/-0.5
If you add one more BCD digit (nnn.x), the error becomes +/-0.05. But in the space of one BCD digit you can pack 3 or 4 bits (depending on the BCD encoding you use). Assuming a simple BCD encoding of 4 bits per digit, adding 4 binary digits instead gives a last digit of weight 1/2^4 = 0.0625, so the error would be +/-0.03125, and in this case binary is significantly better for rounding.
More advanced BCD encodings can do better: for example, packing 3 BCD digits (0-999) into 10 bits (0-1023) is a very compact form where the errors are very similar:
Error=1/(2*1024)=+/-0.000488 in binary
Error=1/(2*1000)=+/-0.0005 in BCD
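Those half-ulp figures are easy to check. Here's a quick sketch (my own, not from the post) that reproduces the numbers above; `half_ulp` is a hypothetical helper name:

```python
# Worst-case rounding error is half the weight of the last digit kept.

def half_ulp(base: int, digits: int) -> float:
    """Worst-case rounding error after keeping `digits` fractional
    digits in the given base."""
    return 1 / (2 * base ** digits)

print(half_ulp(10, 1))   # one BCD digit:           0.05
print(half_ulp(2, 4))    # 4 bits (same storage):   0.03125
print(half_ulp(10, 3))   # 3 BCD digits in 10 bits: 0.0005
print(half_ulp(2, 10))   # 10 bits:                 ~0.000488
```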

Still, binary will have an edge over any other representation, at least until someone develops RAM that can hold more than 2 states per cell.

EDIT: When I say it has an edge, I mean in the magnitude of the error only. When you put everything in the balance, 0.0005/0.000488 is only about 2% more error, but decimal can represent all multiples of 1/5 with zero error too. Your proposed base 30, encoded at 5 bits per digit, would be very compact (0-29 out of 0-31): the error magnitude would be only 6.7% higher than pure binary, but you'd represent a lot more numbers exactly, so there's no absolute winner.
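To put a number on "a lot more numbers exactly": a fraction 1/d has a finite expansion in base b precisely when every prime factor of d divides b. Here's a small sketch (my own; `terminates` is a hypothetical helper) counting how many unit fractions 1/d, for d up to 100, need no rounding at all in bases 2, 10, and 30:

```python
from math import gcd

def terminates(d: int, base: int) -> bool:
    """True if 1/d has a finite expansion in `base`, i.e. every
    prime factor of d also divides the base."""
    g = gcd(d, base)
    while g > 1:
        d //= g          # strip the prime factors shared with the base
        g = gcd(d, base)
    return d == 1        # nothing left over -> finite expansion

for b in (2, 10, 30):
    count = sum(terminates(d, b) for d in range(1, 101))
    print(b, count)      # base 2: 7, base 10: 15, base 30: 34
```

So base 30 represents nearly five times as many of these fractions exactly as base 2 does, at the cost of that 6.7% larger worst-case error.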

