(10-09-2021 08:37 PM)robve Wrote: (10-09-2021 02:27 AM)Paul Dale Wrote: It's neither complex nor state of the art.

How many function evaluations should be performed?

Clearly one evaluation is sufficient to get the correct answer here. Two might be prudent to verify that it is a constant function. Three perhaps?

There is never a guarantee that a given function is constant by probing a few points. That should be obvious, no?

Naive implementations of Gauss-Legendre quadrature may stop after two or three points whose values happen to fit a constant (a degree-zero polynomial). However, giving up that early would be detrimental. Evidently, calculators run quite a bit longer on this function, and for good reason.
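To make the point concrete, here is a small hypothetical illustration in Python (not taken from any calculator firmware): a 3-point Gauss-Legendre rule is exact for polynomials up to degree 5, so a degree-6 perturbation that vanishes at all three nodes is invisible to the rule, yet changes the true integral.

```python
import math

# 3-point Gauss-Legendre nodes and weights on [-1, 1]
nodes = (-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5))
weights = (5 / 9, 8 / 9, 5 / 9)

def f(x):
    # Equals 1 at all three nodes: x**4 * (x**2 - 3/5) vanishes at
    # x = 0 and x = ±sqrt(3/5), but the perturbation has degree 6,
    # one beyond what the 3-point rule integrates exactly.
    return 1.0 + x**4 * (x**2 - 3 / 5)

estimate = sum(w * f(x) for w, x in zip(weights, nodes))
exact = 2.0 + 2 / 7 - (3 / 5) * (2 / 5)  # ∫ f dx over [-1, 1], by hand

print(estimate)  # ≈ 2.0 — the rule "sees" a constant
print(exact)     # ≈ 2.045714... — the true integral
```

Any fixed set of probe points admits such a perturbation, which is why probing a few points can never prove constancy.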

My point about Tanh-Sinh was simply that this method doesn't require a huge number of points to converge accurately, and it is used in the WP 34S... so draw your own conclusions. Tanh-Sinh simply works well for this type of function, given its syntactical structure. More importantly, the function is NOT numerically constant in its present non-simplified form, which is to be expected: noise increases quickly towards -10 and beyond, and is already on the order of 10^-2 at x=-18. Tabulating the error in Excel from -10 to -8.9 gives:

x       error
-10.0   -5.31248E-08
 -9.9   -6.01538E-08
 -9.8   -5.39791E-08
 -9.7   -1.77564E-08
 -9.6   -1.86315E-08
 -9.5   -1.64802E-08
 -9.4   -5.18465E-10
 -9.3   -8.54069E-09
 -9.2   -1.47008E-08
 -9.1    1.40072E-08
 -9.0   -6.26628E-09
 -8.9   -5.77673E-08

This amount of noise is sufficient to trip up quadrature methods. Of course, the noise may differ with non-IEEE 754 double precision and with different implementations of the constituent functions EXP, SINH and COSH. However, as a general consequence of the noise and floating-point limitations, you can either get lucky and hit the exact result 40 with a few points, or get unlucky and waste a great deal of time evaluating points in the presence of noise. The WP 34S and qthsh points reported show exactly what I mean. Also, attempting to push the accuracy beyond about 10^-8 is pointless: there is too much noise to draw definitive conclusions.
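For reference, the Tanh-Sinh scheme mentioned above can be sketched in a few lines: the substitution x = tanh((π/2)·sinh t) clusters nodes near the endpoints, and the plain trapezoid rule in t then converges very rapidly, halving the step each level. This is a bare textbook sketch in Python, not the WP 34S or qthsh implementation:

```python
import math

def _level_sum(f, c, d, h, k0, dk):
    # Sum w(t) * (f(c + d*x) + f(c - d*x)) over t = k*h, k = k0, k0+dk, ...
    s, k = 0.0, k0
    while True:
        u = (math.pi / 2) * math.sinh(k * h)
        if u > 300.0:                 # cosh(u)**2 would overflow; weight ~ 0
            break
        x = math.tanh(u)              # node in (-1, 1)
        if 1.0 - x < 1e-17:           # indistinguishable from the endpoint
            break
        w = (math.pi / 2) * math.cosh(k * h) / math.cosh(u) ** 2
        s += w * (f(c + d * x) + f(c - d * x))
        k += dk
    return s

def tanh_sinh(f, a, b, tol=1e-12, max_level=7):
    # Trapezoid rule in t after x = tanh((pi/2)*sinh(t)); halve h until
    # two successive estimates agree to the requested tolerance.
    c, d = (a + b) / 2.0, (b - a) / 2.0
    h = 1.0
    s = (math.pi / 2) * f(c) + _level_sum(f, c, d, h, 1, 1)
    result = d * h * s
    for _ in range(max_level):
        h /= 2.0
        s += _level_sum(f, c, d, h, 1, 2)   # only odd multiples of h are new
        new = d * h * s
        if abs(new - result) <= tol * max(abs(new), 1.0):
            return new
        result = new
    return result

print(tanh_sinh(math.exp, 0.0, 1.0))                    # ≈ e - 1
print(tanh_sinh(lambda x: 1 / math.sqrt(x), 0.0, 1.0))  # ≈ 2, despite the singularity at 0
```

Note how few points each level adds; that economy is exactly why the method converges before endpoint noise has much chance to accumulate.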

- Rob

Working in variable precision shows that COSH()-SINH() typically creates noise in the last 8 or 9 digits. So if your working precision is 16 digits (like Excel's), you end up with the 10^-8 noise that you correctly recorded.
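The cancellation is easy to reproduce in IEEE 754 double precision: cosh(x) - sinh(x) is analytically exp(-x), but for positive x both terms are about exp(x)/2, so roughly 2x/ln 10 leading digits cancel. A small Python illustration (not tied to any particular calculator; the sign of x depends on how the integrand is written):

```python
import math

def rel_error(x):
    # cosh(x) - sinh(x) == exp(-x) analytically; measure how far the
    # naively subtracted version strays in double precision.
    naive = math.cosh(x) - math.sinh(x)
    return abs(naive - math.exp(-x)) / math.exp(-x)

# Both cosh(x) and sinh(x) are ~exp(x)/2 for x >> 0, each carrying an
# absolute rounding error of ~1e-16 * exp(x)/2, so the relative error of
# the difference grows like ~1e-16 * exp(2x)/2 — about 8 digits lost by x = 10.
for x in (1.0, 5.0, 10.0, 15.0, 18.0):
    print(f"x = {x:4.1f}   relative error ≈ {rel_error(x):.1e}")
```

Rewriting the expression as exp(-x) (or computing 2/(e^x + e^-x)-style forms) removes the subtraction and the noise with it.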

So I guess the trick is: as long as you don't ask the integration algorithm for an error smaller than the system precision minus 9 digits, you'll be fine and get a nice result, with an error on the order of 10^-(precision-9), regardless of the tolerance you requested. For example, with newRPL set to 120 digits and a requested tolerance of 1E-60, adaptive Simpson very quickly returns 40 with 112 correct digits and noise in the last 8. With the standard 32 digits I get an error of 10^-23. If I request a tolerance below 10^-23, the algorithm will literally "hang" in an infinite loop, trying to refine the step unsuccessfully.
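The "hang" is a generic property of adaptive schemes, not specific to newRPL. A textbook adaptive Simpson sketch in double precision (with a depth cap standing in for the infinite loop; newRPL's actual implementation may differ) shows the same behavior: a tolerance above the noise floor converges quickly, one below it refines until the cap:

```python
import math

def adaptive_simpson(f, a, b, tol, max_depth=12):
    # Textbook adaptive Simpson with Richardson correction. Returns
    # (estimate, hit_cap); hit_cap=True means the requested tolerance was
    # never met and the depth cap stopped what would otherwise refine forever.
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    hit_cap = False

    def recurse(a, b, fa, fm, fb, whole, tol, depth):
        nonlocal hit_cap
        m = (a + b) / 2.0
        flm, frm = f((a + m) / 2.0), f((m + b) / 2.0)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        if depth >= max_depth:
            hit_cap = True
            return left + right
        return (recurse(a, m, fa, flm, fm, left, tol / 2.0, depth + 1) +
                recurse(m, b, fm, frm, fb, right, tol / 2.0, depth + 1))

    fa, fm, fb = f(a), f((a + b) / 2.0), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol, 0), hit_cap

noisy = lambda x: math.cosh(x) - math.sinh(x)   # analytically exp(-x)

# On [0, 1] the subtraction noise is ~1e-16, far below the 1e-6 request:
est, cap = adaptive_simpson(noisy, 0.0, 1.0, 1e-6)        # cap is False
# On [14, 15] the absolute noise is ~1e-10, so 1e-15 is unreachable:
est2, cap2 = adaptive_simpson(noisy, 14.0, 15.0, 1e-15)   # cap2 is True
print(cap, cap2)
```

Without the depth cap, the second call would subdivide forever: the noise in the error estimate and the halved tolerance both scale with the interval width, so refinement never wins.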