Error margins
11-09-2024, 09:05 PM
Post: #21
RE: Error margins
(10-27-2024 01:45 PM)Maximilian Hohmann Wrote:  Hello,

over the last two days we have been treated to fascinating demonstrations and insights into Christophe de Dinechin's ongoing development and refinement of his DB48X project. When you see the developer present his product you instinctively think: "This is the calculator that I want to have!", even if you don't really need it :-)

On my way home I thought about it, and about how the arbitrary precision arithmetic implemented there relates to real-life uses. During my studies we had to do experiments and measurements, e.g. some wind-tunnel measurements, process the data and write a report about it. This always had to include an error analysis, which - for me at least - was the most difficult and tedious part.

I have no idea if this already exists somewhere (I guess it does, but I haven't seen it yet), but imagine a data type "Real World Number" that includes a value, an error margin, the unit (optional) and maybe the sensor designation, so that you don't have to enter those data every time.
For example the datasheet of an arbitrarily selected digital laboratory thermometer for the range between -50°C and 250°C states a typical error of +/-0.648°C. So a real world measurement input of 33.5°C from that sensor might look like this: [33.5;-0.648;0.648;°C;"MCC USB-TC"]
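A rough sketch of such a record, just to make the idea concrete (this is only an illustration in Python, not an existing DB48X type; the class and field names are invented):

Code:
from dataclasses import dataclass

@dataclass
class RealWorldNumber:
    value: float        # measured value
    err_low: float      # lower error bound (as a negative offset)
    err_high: float     # upper error bound (as a positive offset)
    unit: str = ""      # optional unit
    sensor: str = ""    # optional sensor designation

# The thermometer reading from the quote above:
t = RealWorldNumber(33.5, -0.648, 0.648, "°C", "MCC USB-TC")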

Interesting. A related idea is "interval arithmetic". The basic idea is to do arithmetic on intervals Low...High, where you compute Low rounding down and High rounding up.

Getting this right is quite difficult. Consider something as simple as the sin() function. Internally, this is implemented with a number of alternating additions and subtractions, and depending on the sign, figuring out how you need to set up your rounding is not a trivial problem. Also, since these functions are periodic, as soon as the input interval is wider than 2π, the only valid output is -1...1, which is not super useful. Digit cancellation is problematic. And so on.
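To make the periodicity point concrete, here is a rough Python sketch of an interval sin(). It deliberately ignores directed rounding (the genuinely hard part) and only shows how the bounds have to be handled:

Code:
import math

def interval_sin(lo, hi):
    """Return (min, max) bounds of sin(x) for x in [lo, hi]."""
    if hi - lo >= 2 * math.pi:
        # The interval covers a full period: the only valid answer is [-1, 1].
        return (-1.0, 1.0)
    smin = min(math.sin(lo), math.sin(hi))
    smax = max(math.sin(lo), math.sin(hi))
    # Does a crest (pi/2 + 2*k*pi) fall inside [lo, hi]?
    k = math.ceil((lo - math.pi / 2) / (2 * math.pi))
    if math.pi / 2 + 2 * math.pi * k <= hi:
        smax = 1.0
    # Does a trough (-pi/2 + 2*k*pi) fall inside [lo, hi]?
    k = math.ceil((lo + math.pi / 2) / (2 * math.pi))
    if -math.pi / 2 + 2 * math.pi * k <= hi:
        smin = -1.0
    return (smin, smax)

print(interval_sin(0.0, 0.1))   # narrow interval: tight bounds
print(interval_sin(0.0, 10.0))  # wider than 2*pi: (-1.0, 1.0)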

There is a reason error analysis in real life is hard ;-)

Quote:Now wouldn't it be useful if the calculator could "propagate" that error through all the calculations done with data of this type and, combined with the other input data and their own error margins, give an end result that contains the total error margin? Again, this may already exist in some form, but I have yet to see it.

Regards
Max

DB48X,HP,me
11-11-2024, 10:18 AM (This post was last modified: 11-12-2024 06:03 AM by Csaba Tizedes.)
Post: #22
RE: Error margins
(10-27-2024 01:45 PM)Maximilian Hohmann Wrote:  Now wouldn't it be useful if the calculator could "propagate" that error through all the calculations done with data of this type and, combined with the other input data and their own error margins, give an end result that contains the total error margin?

Again, this may already exist in some form, but I have yet to see it.

Hello, here is a full set of programs for the HP-32SII for handling numbers with errors (as a help for error propagation calculations) - written approximately a quarter of a century ago ;-)
I hope you won't ignore it just because it is entirely in Hungarian :-D - just enjoy!
Your problem is solved - almost, because these routines handle one-step operations only.

https://drive.google.com/file/d/0B1AqWV7...TTMsEVUD4Q

Csaba
11-11-2024, 11:41 AM
Post: #23
RE: Error margins
(11-09-2024 09:05 PM)c3d Wrote:  Interesting. A related idea is "interval arithmetic". The basic idea is to do arithmetic on intervals Low...High, where you compute Low rounding down and High rounding up.

Getting this right is quite difficult. Consider something as simple as the sin() function. Internally, this is implemented with a number of alternating additions and subtractions, and depending on the sign, figuring out how you need to set up your rounding is not a trivial problem. Also, since these functions are periodic, as soon as the input interval is wider than 2π, the only valid output is -1...1, which is not super useful. Digit cancellation is problematic. And so on.

There is a reason error analysis in real life is hard ;-)

Packages for RPN x42 (uprop, see above), Python (uncertainties) and Julia (Measurements.jl) use the same assumptions and simplifications as described in the "Guide to the Expression of Uncertainty in Measurement" (GUM) and briefly summarized at https://en.wikipedia.org/wiki/Propagatio...lification.
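For a quick illustration of that kind of propagation with the uncertainties package (the numbers reuse the thermometer from the first post; the second reading is made up):

Code:
# pip install uncertainties
from uncertainties import ufloat

t1 = ufloat(33.5, 0.648)   # 33.5 °C, standard uncertainty 0.648 °C
t2 = ufloat(21.2, 0.648)   # a second, independent reading (illustrative value)
print(t1 - t2)             # uncertainties of independent values add in quadrature (~0.92)
print((t1 + t2) / 2)       # mean of the two readings, uncertainty ~0.46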

Your comment on the sin function is true. Some comments in uncertainties/umath_core.py are insightful:

Quote:########################################
# Wrapping of math functions:

# Fixed formulas for the derivatives of some functions from the math
# module (some functions might not be present in all version of
# Python). Singular points are not taken into account. The user
# should never give "large" uncertainties: problems could only appear
# if this assumption does not hold.

# Functions not mentioned in _fixed_derivatives have their derivatives
# calculated numerically.

# Functions that have singularities (possibly at infinity) benefit
# from analytical calculations (instead of the default numerical
# calculation) because their derivatives generally change very fast.
# Even slowly varying functions (e.g., abs()) yield more precise
# results when differentiated analytically, because of the loss of
# precision in numerical calculations.

(btw: sin is a key of the fixed_derivatives dictionary)

Because DB48X supports differentiation and different data types, the question would be how to define a custom data type if 'uprop' had to be programmed in RPL.
11-11-2024, 12:10 PM
Post: #24
RE: Error margins
(11-09-2024 09:05 PM)c3d Wrote:  There is a reason error analysis in real life is hard ;-)

We assumed *all* variables are independent, which may not be true.
With a complicated expression, the assumption of independence may not hold.

Example: x / x = 1 exactly, not 1 with the error of x counted twice.

Equivalent expressions may give very different estimated errors.

lua> D = require'dual'.D
lua> D.sum = D.sum2 -- quadrature sum
lua> R1, R2 = D.new(3,4), D.new(5,6)
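lua> -- D.new(value, uncertainty): R1 = 3±4, R2 = 5±6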

lua> pprint(1/(1/R1+1/R2))
{ 1.875, 1.7757590806469215 }
lua> pprint(R1*R2/(R1+R2))
{ 1.875, 3.764165951774709 }
11-11-2024, 12:46 PM (This post was last modified: 11-11-2024 03:02 PM by raprism.)
Post: #25
RE: Error margins
(11-11-2024 12:10 PM)Albert Chan Wrote:  
(11-09-2024 09:05 PM)c3d Wrote:  There is a reason error analysis in real life is hard ;-)

We assumed *all* variables are independent, which may not be true.
With a complicated expression, the assumption of independence may not hold.

Example: x / x = 1 exactly, not 1 with the error of x counted twice.

Equivalent expressions may give very different estimated errors.

lua> D = require'dual'.D
lua> D.sum = D.sum2 -- quadrature sum
lua> R1, R2 = D.new(3,4), D.new(5,6)

lua> pprint(1/(1/R1+1/R2))
{ 1.875, 1.7757590806469215 }
lua> pprint(R1*R2/(R1+R2))
{ 1.875, 3.764165951774709 }

Do you mean x/y = const +/- 0, because x and y are strictly correlated (+1)?

Some mitigation might be to check for correlations (if the values are derived from data sets in the calculator), issue warnings, and apply corrections for the correlations.

For the other example I would say that using multimeters and resistors with real-life characteristics helps a lot ;-)
Edit: With proper derivative calculations, your example also gives 1.875+/-1.776 irrespective of the expression variant.
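For reference, the same check with the Python uncertainties package, which tracks correlations between derived quantities, so both algebraic forms of the parallel-resistance formula agree:

Code:
from uncertainties import ufloat

R1, R2 = ufloat(3, 4), ufloat(5, 6)
print(1 / (1 / R1 + 1 / R2))   # 1.875 with uncertainty ~1.776
print(R1 * R2 / (R1 + R2))     # same value and the same uncertainty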

Simple-to-use tools/libraries with support for units and uncertainties are still great for quick calculations in labs. For more sophistication one could also use Monte Carlo simulation (see e.g. the mcerp Python package).