Error margins
10-27-2024, 01:45 PM (This post was last modified: 10-27-2024 01:47 PM by Maximilian Hohmann.)
Post: #1
Error margins
Hello,

over the last two days we have been treated to fascinating demonstrations of, and insights into, Christophe de Dinechin's ongoing development and refinement of his DB48X project. When you see the developer present his product, you instinctively think: "This is the calculator that I want to have!", even if you don't really need it :-)

On my way home I thought about it, and about how the arbitrary-precision arithmetic it implements relates to real-life use. During my studies we had to do experiments and measurements, e.g. some wind-tunnel measurements, process the data and write a report about it. This always had to include an error analysis, which - for me at least - was the most difficult and tedious part.

I have no idea if this already exists somewhere (I guess it does, but I haven't seen it yet): a data type "Real World Number" that includes a value, an error margin, the unit (optional) and maybe the sensor designation, so that you don't have to enter that data every time.
For example, the datasheet of an arbitrarily selected digital laboratory thermometer states a typical error of +/-0.648°C for the range between -50°C and 250°C. So a real-world measurement input of 33.5°C from that sensor might look like this: [33.5;-0.648;0.648;°C;"MCC USB-TC"]
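As a rough sketch (editor's illustration with hypothetical field names, not an existing implementation), such a record could be represented like this:

Code:
-- Proposed "Real World Number" as a plain Lua record
local reading = {
  value  = 33.5,           -- measured value
  errlo  = -0.648,         -- lower error margin
  errhi  =  0.648,         -- upper error margin
  unit   = "°C",           -- optional unit
  sensor = "MCC USB-TC",   -- sensor designation
}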

Now wouldn't it be useful if the calculator could "propagate" that error through all the calculations done with data of this type and, combined with the other input data and their own error margins, give an end result that contains the total error margin? Again, this may already exist in some form, but I have yet to see it.

Regards
Max
10-27-2024, 02:51 PM
Post: #2
RE: Error margins
(10-27-2024 01:45 PM)Maximilian Hohmann Wrote:  Now wouldn't it be useful if the calculator could "propagate" that error through all the calculations done with data of this type and, combined with the other input data and their own error margins, give an end result that contains the total error margin? Again, this may already exist in some form, but I have yet to see it.

You might consider dual numbers:
Automatic differentiation using dual numbers
Here's an example of how this could be used for a circuit of two parallel resistors.
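A minimal Lua sketch of the idea (editor's illustration; the linked threads develop it much further): a dual number carries a value together with a derivative, and the arithmetic rules propagate the derivative automatically.

Code:
-- Dual number {v, d}: value and derivative.
local D = {}
D.__index = D
function D.new(v, d) return setmetatable({v, d or 0}, D) end
function D.__add(f, g) return D.new(f[1] + g[1], f[2] + g[2]) end
function D.__mul(f, g) return D.new(f[1]*g[1], f[2]*g[1] + f[1]*g[2]) end
function D.__div(f, g)                    -- (f/g)' = (f'g - fg')/g^2
  return D.new(f[1]/g[1], (f[2]*g[1] - f[1]*g[2])/g[1]^2)
end

-- Parallel resistors: R = R1*R2/(R1+R2); seeding dR1 = 1 yields dR/dR1.
local R1, R2 = D.new(3, 1), D.new(5, 0)
local R = R1*R2/(R1 + R2)
print(R[1], R[2])   --> 1.875   0.390625

Multiplying dR/dR1 by R1's error margin gives that parameter's contribution to the overall error; later posts in this thread discuss how to combine the contributions.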
10-27-2024, 06:12 PM
Post: #3
RE: Error margins
(10-27-2024 01:45 PM)Maximilian Hohmann Wrote:  Now wouldn't it be useful if the calculator could "propagate" that error through all the calculations done with data of this type and, combined with the other input data and their own error margins, give an end result that contains the total error margin? Again, this may already exist in some form, but I have yet to see it.

Seeing and getting used to how RPL handles units - i.e. beautifully - I have wished for this exact same functionality. I have since seen it as a package for a full-blown programming language, but did not need it badly enough at the time, so I have not tested it - I don't even remember whether it was for Python or for Julia. But it did exist in one of the two, and seemed to do exactly this.

The wish is seconded!
10-28-2024, 08:23 AM
Post: #4
RE: Error margins
(10-27-2024 06:12 PM)LinusSch Wrote:  The wish is seconded!

Thirded! Somewhat related would be full support for an array datatype. That way you can do calculations on a 'range' of values, which could represent min, mean and max. This is kinda useful if you're starting with a range, and I sometimes use a matrix on my DM42 for this. But there are some things you can't do in a matrix (like date/time calculations), and of course a matrix does some strange maths-y stuff which isn't what I want...
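A rough Lua sketch of what such a range type could look like (editor's illustration, not DB48X code; the mean is tracked naively):

Code:
-- A range is a {min, mean, max} triple.
local unpack = table.unpack or unpack   -- Lua 5.1/5.2+ compatibility

local function radd(x, y)               -- addition is endpoint-wise
  return {x[1] + y[1], x[2] + y[2], x[3] + y[3]}
end

local function rmul(x, y)               -- multiplication checks all endpoint products
  local p = {x[1]*y[1], x[1]*y[3], x[3]*y[1], x[3]*y[3]}
  return {math.min(unpack(p)), x[2]*y[2], math.max(unpack(p))}
end

local r = radd({3, 4, 5}, {1, 2, 3})
print(r[1], r[2], r[3])   --> 4   6   8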
10-28-2024, 03:02 PM
Post: #5
RE: Error margins
(10-28-2024 08:23 AM)dm319 Wrote:  Somewhat related would be full support for an array datatype. That way you can do calculations on a 'range' of values, which could represent min, mean and max. This is kinda useful if you're starting with a range, and I sometimes use a matrix on my DM42 for this. But there are some things you can't do in a matrix (like date/time calculations), and of course a matrix does some strange maths-y stuff which isn't what I want...

This part of the documentation may not be the most explicit yet, but it sure seems like the array implementation is pretty complete: https://github.com/c3d/db48x/blob/stable...ifferences

I'm not entirely sure how much that reflects the current state of things or the goal, either, but I have read somewhere else that multi-dimensional arrays are also supported.
10-28-2024, 03:31 PM
Post: #6
RE: Error margins
(10-28-2024 03:02 PM)LinusSch Wrote:  This part of the documentation may not be the most explicit yet, but it sure seems like the array implementation is pretty complete: https://github.com/c3d/db48x/blob/stable...ifferences

Oooooh, they even use the term 'vectors', which I think is a great term for a one-dimensional array. That is very exciting.
10-28-2024, 03:33 PM
Post: #7
RE: Error margins
(10-27-2024 02:51 PM)Thomas Klemm Wrote:  You might consider dual numbers:
Automatic differentiation using dual numbers
Here's an example of how this could be used for a circuit of two parallel resistors.

FYI, this only works with + and * (the parallel-resistance intervals work due to luck).
For - and /, we need to flip the sign of the second number's ε term to maximize the intervals.
Other functions may need similar adaptations.

lua> R1 = D.new(3,4e-5) ≡ 3 ± 4*ε
lua> R2 = D.new(5,6e-5) ≡ 5 ± 6*ε
lua> pprint(R1-R2) -- BAD
{ -2, -1.9999999999999998e-05 }
lua> pprint(R1/R2) -- BAD
{ 0.6, 8e-07 }

lua> R2[2] = -R2[2] -- flip sign
lua> pprint(R1-R2) -- OK
{ -2, 1e-04 }
lua> pprint(R1/R2) -- OK
{ 0.6, 1.52e-05 }

Note, the last result is not a true interval, but if ε is small, it is good enough.
In interval math the center is affected by the ε term; dual numbers do not do that.

R1/R2 = (3-4ε)/(5+6ε) .. (3+4ε)/(5-6ε) ≈ 0.6000000001824 ± 1.520000000219e-5



For more complicated expressions, however, end-points may not work.
For f(x±eps), we really need to consider the whole shape.

a = min(f(x±eps))
b = max(f(x±eps))
f(x±eps) = (b+a)/2 ± (b-a)/2

Worse, calculations cannot be "chained".
If we apply g on top, the f(x±eps) calculations should be tossed.

a = min(g(f(x±eps)))
b = max(g(f(x±eps)))
g(f(x±eps)) = (b+a)/2 ± (b-a)/2

Of course, we can ignore all this and just use interval end-points.
But the resulting intervals may not be right. Garbage in, garbage out.
10-28-2024, 04:59 PM
Post: #8
RE: Error margins
(10-28-2024 03:33 PM)Albert Chan Wrote:  
(10-27-2024 02:51 PM)Thomas Klemm Wrote:  You might consider dual numbers:
Automatic differentiation using dual numbers
Here's an example of how this could be used for a circuit of two parallel resistors.

FYI, this only works with + and * (the parallel-resistance intervals work due to luck).
For - and /, we need to flip the sign of the second number's ε term to maximize the intervals.
Other functions may need similar adaptations.

If you calculate a value based on measured parameters that are known with uncertainty (error), the canonical way of estimating the uncertainty of the calculated value is to multiply each error by the partial derivative with respect to that parameter and then add them all in quadrature (square root of the sum of the squares). This of course assumes small errors (it is a first-order Taylor expansion) and statistical independence of the parameters.
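For concreteness, a minimal Lua sketch of this rule (editor's illustration, reusing the R1 = 3±4, R2 = 5±6 parallel-resistor example from earlier in the thread, with analytic partial derivatives):

Code:
-- f(R1,R2) = R1*R2/(R1+R2), with R1 = 3±4 and R2 = 5±6
local R1, e1 = 3, 4
local R2, e2 = 5, 6

local f     = R1*R2/(R1 + R2)
local dfdR1 = (R2/(R1 + R2))^2           -- partial derivative w.r.t. R1
local dfdR2 = (R1/(R1 + R2))^2           -- partial derivative w.r.t. R2

-- add the error contributions in quadrature
local sigma = math.sqrt((dfdR1*e1)^2 + (dfdR2*e2)^2)
print(f, sigma)   --> 1.875   1.7757590806...

This matches the 1.77575908065 obtained by other methods later in the thread.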
Back in college I wrote a program for my 41 that calculated the derivatives numerically and used it to estimate the propagated error.
Funnily enough, that led to the only time in 40 years that I used my 41 for work: I needed to come up with an alternative resistor bridge design using the AD5254 digital pot (30% unit-to-unit tolerance!) that would not be hopelessly inaccurate. I used the 41 program to quickly evaluate alternatives.
10-28-2024, 05:34 PM (This post was last modified: 10-28-2024 05:39 PM by Albert Chan.)
Post: #9
RE: Error margins
Hi, born2laser

Just to be clear, you mean like this?

Dual setup (corrected for interval use):

R1 - R2 = (3±4ε) - (5±6ε) → -2 ± (4+6)ε = -2 ± 10ε
R1 / R2 = (3±4ε) / (5±6ε) → (3/5) * (1 ± (4/3+6/5)ε) ≈ 0.6 ± 1.52ε


Square root of the sum of the squares (i.e. hypot):

R1 - R2 = (3±4ε) - (5±6ε) → -2 ± hypot(4,6)*ε ≈ -2 ± 7.21ε
R1 / R2 = (3±4ε) / (5±6ε) → (3/5) * (1 ± hypot(4/3,6/5)*ε) ≈ 0.6 ± 1.08ε
10-28-2024, 05:47 PM
Post: #10
RE: Error margins
(10-28-2024 04:59 PM)born2laser Wrote:  If you calculate a value based on measured parameters that are known with uncertainty (error), the canonical way of estimating the uncertainty of the calculated value is to multiply each error by the partial derivative with respect to that parameter and then add them all in quadrature (square root of the sum of the squares). This of course assumes small errors (it is a first-order Taylor expansion) and statistical independence of the parameters.
Yes, the canonical way is the way to go, however...
(10-28-2024 04:59 PM)born2laser Wrote:  Back in college I wrote a program for my 41 that calculated the derivatives numerically and used it to estimate the propagated error...
The issue with calculating derivatives numerically is that the process of numerical differentiation (taking a difference divided by a difference) dramatically increases noise (i.e., error). For example, if starting with position data, after two numerical differentiations to find acceleration, noise resulting from the numerical differentiation operations can make the signal unusable. Differentiation acts as a high-pass filter.

Automatic differentiation (as suggested by Thomas) is a better alternative.

However, if requirements are simple, the following three rules for addition & subtraction, multiplication & division, and exponentiation (special cases of the canonical way) may suffice:
Rule 1: + - : quadrature sum the errors to get the error of the result.
Rule 2: x / : quadrature sum the fractional errors to get the fractional error of the result.
Rule 3: z^n : multiply the fractional error by the exponent n to get the fractional error of the result.
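A rough Lua sketch of the three rules (editor's illustration; the {v, e} pair layout and helper names are hypothetical):

Code:
-- A quantity is a {v, e} pair: value and absolute error.
local function qsum(a, b) return math.sqrt(a*a + b*b) end -- quadrature sum

local function add(x, y)                 -- Rule 1 (same error rule for -)
  return {x[1] + y[1], qsum(x[2], y[2])}
end
local function mul(x, y)                 -- Rule 2 (same error rule for /)
  local v = x[1]*y[1]
  return {v, math.abs(v)*qsum(x[2]/x[1], y[2]/y[1])}
end
local function pown(x, n)                -- Rule 3
  local v = x[1]^n
  return {v, math.abs(v*n*(x[2]/x[1]))}
end

local r = add({3, 4}, {5, 6})            -- e.g. two resistors in series
print(r[1], r[2])   --> 8   7.2111025509...

The series example reproduces the hypot(4,6) ≈ 7.21 shown in an earlier post.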
10-28-2024, 07:24 PM
Post: #11
RE: Error margins
On this topic I would like to point to the HP42s program uprop by Mitch Richling. There, complex numbers are used to propagate errors (std. dev.).

Surely an extra data type that adds non-zero measurement uncertainties to (mean) values, in conjunction with the units system, would have great practical value for the evaluation of measured data.
10-28-2024, 07:49 PM
Post: #12
RE: Error margins
(10-28-2024 04:59 PM)born2laser Wrote:  If you calculate a value based on measured parameters that are known with uncertainty (error), the canonical way of estimating the uncertainty of the calculated value is to multiply each error by the partial derivative with respect to that parameter and then add them all in quadrature (square root of the sum of the squares).

Assume that we have the following program saved as II (since || isn't valid):
Code:
« DINV SWAP DINV + DINV »
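(DINV is presumably the dual-inverse program from the dual-numbers thread linked above; on a dual number stored as a complex value it maps (a, b) to (1/a, -b/a²), which is consistent with the results below.)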

Then we can calculate this with:

(3 4) 5 II

(1.875,1.5625)

IM

1.5625

3 (5 6) II

(1.875,0.843750000001)

IM

0.843750000001

R→C
ABS

1.77575908065

We can see that in both cases the real values (i.e. 1.875) agree.
This is a bit tedious, though, so it's time to write a little program.
10-28-2024, 08:08 PM
Post: #13
RE: Error margins
(10-28-2024 07:24 PM)raprism Wrote:  On this topic I would like to hint to the HP42s program uprop made by Mitch Richling.

Thanks. This works well!
Note: quoted functions are selected from the menu, not typed on actual keys.

XEQ "UPROP"

3 Enter 4 Complex "1/X"
5 Enter 6 Complex "1/X" "+" "1/X"

1.875+1.775759080646921540612787552321336i      ; resistance in parallel
10-28-2024, 08:35 PM
Post: #14
RE: Error margins
(10-28-2024 05:47 PM)carey Wrote:  The issue with calculating derivatives numerically is that the process of numerical differentiation (taking a difference divided by a difference) dramatically increases noise (i.e., error).

You are absolutely right in what you say, in particular if you are trying to calculate the derivative of a function with added fluctuations, but that is not what I meant. To propagate error you need to take the derivative of the formula used to calculate the value from the parameters. Instead of putting pen to paper to work out the partial derivatives, I programmed that formula as an RPN program and had another program approximate the derivatives numerically (evaluating the function at the point of interest and varying the value of one parameter at a time). It is good for any function, since it approximates the function locally with a polynomial of any desired degree; it is most definitely not good for experimental data with noise, but fine for mathematical functions. I used degree 3, which is overkill, but in my defense I have to say I was reading a book on numerical methods at the time and had a 41 in my hands!
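In Lua, a simplified version of that approach (editor's sketch: plain central differences instead of the degree-3 polynomial fit described above; names are hypothetical) might look like:

Code:
-- Estimate each partial derivative by a central difference, then
-- combine the error contributions in quadrature.
local function propagate(f, x, e)
  local s, h = 0, 1e-6
  for i = 1, #x do
    local xp, xm = {table.unpack(x)}, {table.unpack(x)}
    xp[i], xm[i] = x[i] + h, x[i] - h
    local d = (f(xp) - f(xm)) / (2*h)    -- central difference
    s = s + (d * e[i])^2
  end
  return f(x), math.sqrt(s)
end

local par = function(r) return r[1]*r[2]/(r[1] + r[2]) end
print(propagate(par, {3, 5}, {4, 6}))   --> 1.875   1.7757590806...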
10-28-2024, 08:39 PM (This post was last modified: 10-28-2024 08:41 PM by Mark Power.)
Post: #15
RE: Error margins
Hugh Steers showed a system on the HP50g for dynamic precision and error propagation at the HPCC Conference in 2007. You might want to look up the video, titled "Formlua: A New Programming Environment for HP Calculators", on Eric's Conference Video page.

Combine that with the RPL tag system (to add the sensor name) and the RPL units system, and you have a pretty complete solution.
10-28-2024, 09:37 PM
Post: #16
RE: Error margins
(10-28-2024 08:35 PM)born2laser Wrote:  
(10-28-2024 05:47 PM)carey Wrote:  The issue with calculating derivatives numerically is that the process of numerical differentiation (taking a difference divided by a difference) dramatically increases noise (i.e., error).

You are absolutely right in what you say, in particular if you are trying to calculate the derivative of a function with added fluctuations, but that is not what I meant. [...]

Thank you for the detailed explanation!
10-28-2024, 10:24 PM
Post: #17
RE: Error margins
(10-28-2024 07:49 PM)Thomas Klemm Wrote:  1.77575908065

(10-28-2024 08:08 PM)Albert Chan Wrote:  1.875+1.775759080646921540612787552321336i

Nice, it appears that the results agree.
10-29-2024, 03:07 PM (This post was last modified: 10-29-2024 05:26 PM by Albert Chan.)
Post: #18
RE: Error margins
(10-28-2024 03:33 PM)Albert Chan Wrote:  FYI, [dual] only works with + and * (the parallel-resistance intervals work due to luck).
For - and /, we need to flip the sign of the second number's ε term to maximize the intervals.
Other functions may need similar adaptations.

Correction: it may not work with + and * either; we need to match the signs.
The fix is amazingly simple: just change the way 'sum' works.
All other f(x) then work out of the box, for both derivatives and uncertainties, if we ignore the sign (*)

Code:
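-- Editor's note (assumption): abs, max, hypot, pow, log, exp, sqrt, surd,
-- signbit, copysign, log1p, expm1 and the trig functions, as well as pprint,
-- are assumed to come from the author's environment (mostly locals bound to
-- Lua's math library plus custom helpers); they are not all standard globals.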
D = {} -- note: f1, f2 = f, f'
D.__index = D

D.sum3 = function(x,y) return abs(x) + abs(y) end    -- ball arithmetic
D.sum2 = hypot                                       -- quadrature sum
D.sum1 = function(x,y) return max(abs(x),abs(y)) end -- dominant error
D.sum0 = function(x,y) return x + y end              -- plain sum
D.sum  = D.sum0 -- default

function D.__add(f,g)
    if type(f)=='number' then f=D.new(f) end
    if type(g)=='number' then g=D.new(g) end
    return D.new(f[1]+g[1], D.sum(f[2],g[2]))
end
function D.__sub(f,g)
    if type(f)=='number' then f=D.new(f) end
    if type(g)=='number' then g=D.new(g) end
    return D.new(f[1]-g[1], D.sum(f[2],-g[2]))
end
function D.__mul(f,g)
    if type(f)=='number' then f=D.new(f) end
    if type(g)=='number' then g=D.new(g) end
    return D.new(f[1]*g[1], D.sum(f[2]*g[1],f[1]*g[2]))
end
function D.__div(f,g)
    if type(f)=='number' then f=D.new(f) end
    if type(g)=='number' then g=D.new(g) end
    return D.new(f[1]/g[1], D.sum(f[2]*g[1],-f[1]*g[2])/g[1]^2)
end
function D.__pow(f,g)
    if type(f)=='number' then f=D.new(f) end
    if type(g)=='number' then g=D.new(g) end
    local r = pow(f[1],g[1])
    return D.new(r, r*D.sum(log(f[1])*g[2],f[2]/f[1]*g[1]))
end

function D.new(f1,f2) return setmetatable({f1,f2 or 0}, D) end
function D.__unm(f)   return D.new(-f[1], -f[2]) end -- negate
function D.add(f,a) return D.new(f[1]+a,f[2]) end
function D.sub(f,a) return D.new(f[1]-a,f[2]) end
function D.mul(f,a) return D.new(f[1]*a,f[2]*a) end
function D.div(f,a) return D.new(f[1]/a,f[2]/a) end
function D.rcp(f,a)  a = (a or 1)/f[1]   ; return D.new(a, f[2]*a/-f[1])    end
function D.sqrt(f)   local r=sqrt(f[1])  ; return D.new(r, f[2]*r/(f[1]*2)) end
function D.surd(f,n) local r=surd(f[1],n); return D.new(r, f[2]*r/(f[1]*n)) end

function D.abs(f) return D.mul(f, signbit(f[1]) and -1 or 1) end
function D.log(f) return D.new(log(f[1]), f[2]/f[1]) end
function D.log1p(f) return D.new(log1p(f[1]), f[2]/(f[1]+1)) end
function D.exp(f)   local r=exp(f[1])  ; return D.new(r, r*f[2]) end
function D.expm1(f) local r=expm1(f[1]); return D.new(r, (r+1)*f[2]) end

function D.sin(f) return D.new(sin(f[1]),  cos(f[1])*f[2]) end
function D.cos(f) return D.new(cos(f[1]), -sin(f[1])*f[2]) end
function D.tan(f)  local r=tan(f[1]) ; return D.new(r, f[2]*(1+r*r)) end
function D.asin(f) local r=asin(f[1]); return D.new(r,  f[2]/cos(r)) end
function D.acos(f) local r=acos(f[1]); return D.new(r, -f[2]/sin(r)) end
function D.atan(f) local r=atan(f[1]); return D.new(r, f[2]/(1+f[1]^2)) end

function D.copysign(f,s1,s2)
    return D.new(copysign(f[1], s1 or f[1]), copysign(f[2], s2 or f[2]))
end

lua> R1 = D.new(3,4)
lua> R2 = D.new(5,-6)

lua> D.sum = D.sum0 -- eps field is derivative (default)
lua> pprint(1/(1/R1 + 1/R2))
{ 1.875, 0.71875 }
lua> D.sum = D.sum1 -- eps field is dominant error
lua> pprint(1/(1/R1 + 1/R2))
{ 1.875, 1.5625 }
lua> D.sum = D.sum2 -- eps field is quadrature sum
lua> pprint(1/(1/R1 + 1/R2))
{ 1.875, 1.7757590806469215 }
lua> D.sum = D.sum3 -- eps field is ball arithmetic
lua> pprint(1/(1/R1 + 1/R2))
{ 1.875, 2.40625 }

lua> D.sum = D.sum0 -- restore to normal dual

(*) Fixing the sign is trivial (in D.new, use abs if not plain sum).
But I think that is really a waste of computing resources.
We know the uncertainty has an implicit ± in front anyway.