Error propagation in adaptive Simpson algorithm
07-30-2018, 06:42 PM (This post was last modified: 07-30-2018 06:52 PM by Claudio L..)
Post: #1
 Claudio L. Senior Member Posts: 1,883 Joined: Dec 2013
I recently implemented an adaptive Simpson integrator, almost straight from Wikipedia's article.

I immediately noticed that on every test I threw at it, it produces a result that is consistently 2 or 3 digits more accurate than requested.
Since the algorithm halves the step on each pass, each additional digit of accuracy requested needs roughly 3 more "splits". Roughly speaking, each time it splits the interval in 2 we need twice as many function evaluations as on the previous pass, so those last 3 to 6 unnecessary splits can be extremely costly in terms of time.
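For reference, the recursion I'm describing looks roughly like this (a minimal Python sketch of the Wikipedia version, not my actual code; the names `simpson` and `adaptive` are just mine):

```python
import math

def simpson(f, a, b):
    # Plain Simpson's rule on [a, b], sampling both ends and the midpoint.
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def adaptive(f, a, b, eps, whole=None):
    # Wikipedia-style adaptive Simpson: compare the one-panel estimate
    # `whole` against the sum of the two half-panel estimates. If they
    # agree to within 15*eps, accept (adding a Richardson correction);
    # otherwise recurse on each half with HALF the tolerance.
    if whole is None:
        whole = simpson(f, a, b)
    m = (a + b) / 2
    left, right = simpson(f, a, m), simpson(f, m, b)
    delta = left + right - whole
    if abs(delta) <= 15 * eps:
        return left + right + delta / 15
    return (adaptive(f, a, m, eps / 2, left) +
            adaptive(f, m, b, eps / 2, right))
```

With this sketch, `adaptive(math.sin, 0, math.pi, 1e-8)` lands within the requested 1e-8 of the exact value 2. (A production version would reuse the function evaluations shared between panels; this naive sketch recomputes them.)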

The first thing I notice is that this algorithm halves the requested tolerance every time it splits. Basically it assumes Error = Error_Left + Error_Right, so it requires each of Error_Left and Error_Right to be less than half the requested tolerance, guaranteeing the total meets the requirement.
This seems quite conservative to me: once you are doing hundreds or thousands of steps, shouldn't the total error be proportional to sqrt(N)*AvgError, with N being the number of intervals considered?
If so, we could perhaps use 1/sqrt(2) = 70%: whenever we split an interval in half, request 70% of the tolerance for each half instead of 50%.
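The change is a one-liner; here is a sketch with the split factor as a parameter, plus an evaluation counter so the two choices can be compared (the names and the counter are just for illustration, not from the original algorithm):

```python
import math

def adaptive(f, a, b, eps, split):
    # Adaptive Simpson where `split` is the fraction of the tolerance
    # handed to EACH half when recursing:
    #   split = 0.5         -> the textbook halving
    #   split = 1/sqrt(2)   -> the looser, statistical split proposed above
    # Returns (integral, number_of_function_evaluations).
    calls = 0
    def S(x0, x1):
        # Simpson's rule; this naive sketch re-evaluates f at shared
        # points instead of caching, but both variants pay the same cost.
        nonlocal calls
        calls += 3
        m = (x0 + x1) / 2
        return (x1 - x0) / 6 * (f(x0) + 4 * f(m) + f(x1))
    def rec(x0, x1, eps, whole):
        m = (x0 + x1) / 2
        left, right = S(x0, m), S(m, x1)
        delta = left + right - whole
        if abs(delta) <= 15 * eps:
            return left + right + delta / 15
        return rec(x0, m, eps * split, left) + rec(m, x1, eps * split, right)
    return rec(a, b, eps, S(a, b)), calls

f = lambda x: math.exp(-x)
strict, n_half  = adaptive(f, 0, 500, 1e-8, 0.5)
loose,  n_sqrt2 = adaptive(f, 0, 500, 1e-8, 1 / math.sqrt(2))
```

Since 1/sqrt(2) > 1/2, the tolerance at every recursion depth is looser, so the looser variant's recursion tree is always a subtree of the strict one: it can only use fewer (or equal) function evaluations, at the price of a weaker worst-case guarantee.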

And there's also a magic factor of 15 whose origin I don't know; perhaps that's another factor to play with.
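For what it's worth, the usual account of that 15 (my understanding, not something stated in the pseudocode itself) comes from the O(h^4) convergence of Simpson's rule via Richardson extrapolation:

```latex
% S_1 uses step h on [a,b]; S_2 = S_left + S_right uses step h/2.
% Halving the step divides the O(h^4) error by about 16, so:
\begin{aligned}
I - S_1 &\approx C h^4, \\
I - S_2 &\approx C \left(\tfrac{h}{2}\right)^4 = \tfrac{1}{16} C h^4, \\
S_2 - S_1 &= (I - S_1) - (I - S_2) \approx 15\,(I - S_2), \\
\Rightarrow\quad I - S_2 &\approx \frac{S_2 - S_1}{15}.
\end{aligned}
```

Read this way, the acceptance test |S_2 - S_1| <= 15*eps is just asking that the estimated error of S_2 be below eps, and adding (S_2 - S_1)/15 to the accepted result is the matching Richardson correction.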

EDIT:
For example (from another thread): Integral of exp(-x) between 0 and 500.
If I request 1e-8 tolerance, the result is:
1.00000000000704832....
In this case the result has 2 more correct digits than requested.