Implement [ Variable ] Precision Floating Point [ Arithmetic ] in CAS Mode
12-01-2015, 02:30 PM
Post: #7
implementation of dynamic precision interval/range arithmetic
I would like to chip in to say that variable precision arithmetic, if attempted, should be a form of interval arithmetic.

Interval arithmetic allows the capture of numerical uncertainty, so that unstable calculations do not introduce garbage. A classic example is the cross product:

A*B - B*C

If A*B is close to B*C, many digits of precision can be annihilated. In a fixed-precision implementation, you wind up "pulling up" garbage digits from the bottom to fill those removed from the top without really knowing it.

Cross products like this appear all the time, e.g. in a regular complex number multiply.

The other reason is, of course, that you need dynamic precision in examples like this to temporarily extend the working space for the A*B multiply.
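
To make the digit loss concrete, here is a minimal C sketch (my own illustration, not from any particular calculator) where a cross term of this shape silently destroys the answer in ordinary double precision:

    #include <stdio.h>

    int main(void)
    {
        double x = 1e8;

        /* Exactly, (x+1)*(x-1) - x*x = -1. But near 1e16 the spacing
           between adjacent doubles is 2, so the two products round to
           the same value and every meaningful digit cancels away. */
        double d = (x + 1.0) * (x - 1.0) - x * x;

        printf("%g\n", d);   /* prints 0, not -1 */
        return 0;
    }

A fixed-precision result gives no hint that this happened; an interval result would come back wide enough to show the uncertainty.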

I have implemented a version of dynamic precision interval arithmetic that I use for my own calculations. The implementation follows ideas of Oliver Aberth (1988), who suggests representing the interval with an "extra digit".

Aberth explains how you can represent an interval [a,b], which would otherwise take two numbers, in only slightly more space than just one number:

First, represent [a,b] as a midpoint and a range: [m-r, m+r]

You can think of this representation as "m", the number to display, and "r", the error or uncertainty. Treating `r' as an error term allows it to be contained in a single digit.

eg.

M = 3.14159
R = 0.00002

says we have [3.14157,3.14161]

Here we arrange the `r' term to coincide with the least significant digit of m.
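
As a concrete sketch of this layout in C (field names and sizes are my own guesses, not the actual code):

    typedef struct {
        int digits[8];   /* decimal digits of m, most significant first */
        int len;         /* how many digits of m are currently valid    */
        int rdigit;      /* error r, in units of the last place of m    */
        int exponent;    /* one shared power-of-ten exponent            */
        int sign;        /* sign of m                                   */
    } Range1;

    /* [3.14157, 3.14161] stored as m = 3.14159 with r = 2: */
    Range1 pi_ish = { {3,1,4,1,5,9}, 6, 2, 0, +1 };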
Suppose now the error term increases, due to an unstable calculation, to

M = 3.14159
R = 0.00020

= [3.14139,3.14179]

Then the final 9 of m is swamped by our error and can be dropped. So instead we adjust our mid and range as follows:

M = 3.1415
R = 0.0003

= [3.1412,3.1418]

So we've lost a digit of m due to the increase in error. You may also notice that the error term has increased by one to accommodate rounding, because we _must_ ensure our interval never shrinks.
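
In code, that adjustment might look like the following (a sketch against the hypothetical Range1 above; the rounding rule is the important part):

    /* Drop digits of m while r no longer fits in one digit, rounding r
       up so the new interval always contains the old one. */
    void renormalize(Range1 *x)
    {
        while (x->rdigit > 9 && x->len > 1) {
            int dropped = x->digits[--x->len];  /* truncated digit of m */

            /* rescale r to the new last place: `dropped` accounts for
               the midpoint moving, `+ 9` forces rounding up           */
            x->rdigit = (x->rdigit + dropped + 9) / 10;
        }
    }

With the example above, r = 20 and a dropped 9 give (20 + 9 + 9)/10 = 3, matching the hand calculation.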

In my implementation, I do not use single decimal digits as illustrated above, but bunches of 4 digits. My BCD implementation is based on 10000 rather than 10: each "digit" is actually 4 decimal digits, from 0 to 9999.

This implementation has many advantages. The biggest is that your arithmetic calculates 4 digits at a time rather than one. Each of my 4-digit groups, called a "4dec", is stored internally as a 16 bit _binary_ number. Accordingly, arithmetic on each 4dec proceeds as normal machine binary operations, with carry over 9999.

Even embedded hardware has a 16 bit * 16 bit -> 32 bit hardware integer multiply, which is a major boon to the multiplier.
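
A sketch of the inner multiply loop under these assumptions (uint16_t limbs holding 0..9999; the names are mine):

    #include <stdint.h>

    /* Multiply one 4dec `b` into row `a`, accumulating into `acc`.
       Each 16x16 -> 32 bit product is split back into a 4dec and a
       carry, exactly like long multiplication in base 10000. */
    static void mul_row(uint16_t *acc, const uint16_t *a, int n, uint16_t b)
    {
        uint32_t carry = 0;
        for (int i = n - 1; i >= 0; --i) {
            uint32_t p = (uint32_t)a[i] * b + acc[i] + carry;  /* < 2^32 */
            acc[i] = (uint16_t)(p % 10000);
            carry  = p / 10000;
        }
        /* a full implementation propagates `carry` into the next limb up */
    }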

This technique brings the performance of 4dec BCD into a comparable ballpark to software binary floats, rather than being way, way slower.

BUT there's another big win for range arithmetic: the error term is now a 4dec and can thus accommodate a wider range before rounding. In other words, there is less truncation of M as R increases; R can grow 10-fold without M losing a 4dec, as it would lose a digit in the simple BCD scheme above.

Back to the representation: we store, e.g.

M = 3.1415 x 10^-3
R = 0.0003 x 10^-3

internally as:

3.14153 exponent -3

(that is, the digits of M with the single error digit appended at the end)

and print out as either:

3.1415+/-3 x 10^-3 or just 3.141 x 10^-3

since normally we suppress all uncertainty in the display.

This also illustrates that the exponent of M and of the range is the same number.
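
A small sketch of the two display forms, again against the hypothetical Range1 (the formatting details are my own):

    #include <stdio.h>

    void print_range(const Range1 *x, int show_error)
    {
        int last = show_error ? x->len : x->len - 1; /* hide uncertain digit */

        printf("%s%d.", x->sign < 0 ? "-" : "", x->digits[0]);
        for (int i = 1; i < last; ++i)
            putchar('0' + x->digits[i]);
        if (show_error)
            printf("+/-%d", x->rdigit);
        printf(" x 10^%d\n", x->exponent);
    }

For the number above this prints "3.1415+/-3 x 10^-3" or "3.141 x 10^-3".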

The bottom line is that you get to store an interval, more usefully as a value plus an error range, in just ONE EXTRA DIGIT of storage.

For sure, this is how calculators should work and, in fact, how they should have worked for the last 20 years.

I'd be happy to advise on this kind of implementation. My own code works for the usual arithmetic and some scientific functions; I have to go back to the project to finish the other scientific functions (and test them).

The implementation of scientific functions is a bit more involved than normal, but not a lot: you have to have an error term to feed back into your range.

For example, say you use a Taylor series for sin(x). You can use the first truncated term as an error bound, which you merge with any existing error range introduced by the series arithmetic itself. This gives you a final upper bound on your error.
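
To illustrate the bookkeeping (doubles stand in for the 4dec range type here so the error handling stays visible; the structure, not the float type, is the point):

    #include <math.h>

    typedef struct { double mid, rad; } RangeD;

    /* sin by Taylor series about 0. For small |x| the terms alternate
       and shrink, so the first omitted term bounds the truncation
       error; input uncertainty propagates through |sin'(x)| <= 1. */
    RangeD range_sin(RangeD x)
    {
        double term = x.mid, sum = 0.0;
        int n = 1;
        while (fabs(term) > 1e-18) {
            sum += term;
            term *= -x.mid * x.mid / ((n + 1) * (n + 2));
            n += 2;
        }
        RangeD r = { sum, fabs(term) + x.rad };
        return r;
    }

A production version would also fold in the rounding error of the series arithmetic itself, which is the next point.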

Everything has to have an error upper bound and then you're good.