Adaptive Simpson and Romberg integration methods
03-24-2021, 08:54 PM
(This post was last modified: 03-26-2021 08:19 PM by robve.)
Moved this post to this new thread, as suggested by others.
I assume most readers are familiar with integration and quadratures, so the methods and results reported here should not be surprising to anyone. Nevertheless, I hope this answers some of the questions frequently raised about numerical integration with these methods and how they compare. As originally intended, this post is to be viewed in the context of vintage pocket computers with limited hardware: first find an approximation fast (within 1E-5, for example), and then take a look at what it takes to tighten the error tolerance to, say, 1E-9 and what the pitfalls of these methods are. Many modern and better integration methods exist that I am very familiar with, but those may not run on machines with just 1K~2K of RAM for programs and data. These little machines could run Simpson's rule programs that people could enter in BASIC, e.g. from listings in accompanying "application books" and from magazines. Likewise, this post includes listings for Romberg and Adaptive Simpson that will run on these little machines.

Adaptive Simpson is one of my favorite integration methods (on those machines). It is usually much more efficient than composite Simpson's rule and Romberg methods, since it uses fewer function evaluations in places where the integrand is well approximated by a cubic function.

"Adaptive Simpson's method uses an estimate of the error we get from calculating a definite integral using Simpson's rule. If the error exceeds a user-specified tolerance, the algorithm calls for subdividing the interval of integration in two and applying adaptive Simpson's method to each subinterval in a recursive manner." - https://en.wikipedia.org/wiki/Adaptive_S...27s_method

The HP-71B Math Pac uses the Romberg method. One advantage of Romberg over Adaptive Simpson is its error estimate, which is based on the error of a high-degree interpolating polynomial constructed for the integrand. This error estimate can be used reliably unless the integrand is "too spiky", that is, has large high-order derivatives, as in step functions and square and sawtooth wave functions with non-differentiable points. Nevertheless, the final error estimate when convergence is not reached is rarely reported by an integration method. The listings at the end of this post include code to report the final error estimate on non-convergence.

For this post I will use tried and tested implementations of Adaptive Simpson (qasi) and Romberg (qromb and qromo) that I wrote based on optimized versions of published algorithms in reputable sources such as Numerical Recipes and NIST FORTRAN programs. I believe these are better and safer than those found in places like Wikipedia, which have errors in their listings. I've tested these implementations and compared the results and the number of function evaluations to other Romberg integrators to corroborate the results. Nevertheless, many other good implementations are possible. The listings of the implementations in C and in BASIC are discussed further below if you are interested and want to reproduce the results.

I received some feedback and questions about the Romberg implementations in SHARP BASIC that I showed and used in earlier posts. I will use the same programs here, but written in C for speed. The BASIC implementations are derived from these C versions and also included at the end of this post. The integration methods take 5 parameters: the function f, the integration range a to b (finite), the maximum number of iterations n, and the error tolerance eps.
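To make the recursive subdivision concrete before we get to the results, here is a minimal recursive Adaptive Simpson sketch in C. It is not the qasi listing at the end of this post: the names qasi_sketch and qas_step, the explicit depth limit n, and the textbook factor 15 in the acceptance test are assumptions of this illustration.

Code:
#include <math.h>

/* One Simpson estimate over [a,b] with precomputed f(a), f(m), f(b). */
static double simpson(double a, double b, double fa, double fm, double fb) {
  return (b - a) * (fa + 4.0 * fm + fb) / 6.0;
}

/* Recursive step: 'whole' is the Simpson estimate over [a,b]. */
static double qas_step(double (*f)(double), double a, double b,
                       double fa, double fm, double fb, double whole,
                       double eps, int depth) {
  double m  = (a + b) / 2;
  double lm = (a + m) / 2, rm = (m + b) / 2;
  double flm = f(lm), frm = f(rm);
  double left  = simpson(a, m, fa, flm, fm);
  double right = simpson(m, b, fm, frm, fb);
  double delta = left + right - whole;
  if (depth <= 0 || fabs(delta) <= 15.0 * eps)  /* 15: textbook Richardson factor */
    return left + right + delta / 15.0;
  return qas_step(f, a, m, fa, flm, fm, left,  eps / 2, depth - 1)
       + qas_step(f, m, b, fm, frm, fb, right, eps / 2, depth - 1);
}

/* Adaptive Simpson over [a,b] with tolerance eps and maximum depth n. */
double qasi_sketch(double (*f)(double), double a, double b, int n, double eps) {
  double m = (a + b) / 2;
  double fa = f(a), fm = f(m), fb = f(b);
  return qas_step(f, a, b, fa, fm, fb, simpson(a, b, fa, fm, fb), eps, n);
}

With f, a, b, n, and eps as above, a call such as qasi_sketch(f, 0, 1, 21, 1e-9) returns the approximation of the integral over [0,1].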
There are two versions of Romberg, one for closed intervals (qromb) and one for open intervals (qromo).

To get started, we compute \( \int_{-1}^1 p(x)\,dx \) and collect the average number of integrand evaluations over 1000 different polynomials \( p(x) \) of degrees 1 to 30 to reach convergence:

$$ \begin{array}{rrrr} \textit{eps} & \texttt{qromb} & \texttt{qromo} & \texttt{qasi} \\ \hline 10^{-3} & 36 & 96 & 23 \\ 10^{-5} & 72 & 234 & 62 \\ 10^{-7} & 127 & 448 & 195 \\ 10^{-9} & 198 & 887 & 606 \\ \end{array} $$

The averages are rounded to the nearest integer. Polynomials are randomly generated with coefficients \( -1\le a_i\le 1 \).

For polynomials (and many other functions), Adaptive Simpson converges quickly as long as the error threshold eps is not too tight. Otherwise, Romberg over closed intervals appears to be best to achieve high accuracy for polynomials. This is not entirely surprising, because these methods essentially perform polynomial interpolation and extrapolation to determine the definite integral.

In practice, Adaptive Simpson is really good for getting a quick approximation, with only two minor drawbacks:

1. Adaptive Simpson may not produce usable error estimates of the result when the function is not well approximated by a cubic at the points evaluated. (Note: modifications are possible that produce an error estimate, but we're talking about the straightforward method.)

2. Adaptive Simpson uses closed intervals and may fail to integrate improper integrals. Improper integrals require integration over an open interval to avoid evaluating the endpoints, for example using Romberg with midpoint quadratures, see e.g. Numerical Recipes 2nd Ed. Ch. 4.4 qromo.

If the integrand has singularities (non-evaluable points), then one can split up the interval into parts and use a quadrature suitable for open intervals to evaluate the parts separately. This may not always be necessary, because equidistant methods like Romberg can skip over such points if you can control the evaluation points carefully. Adaptive Simpson will very likely not work, because it may subdivide its intervals until it hits the problematic points. On the other hand, open intervals may not be necessary for integrands that produce a valid value at the endpoints, such as \( \int_0^1 \arctan(1/x)\,dx \), whose integrand evaluates to \( \pi/2 \) at \( x=0 \). Evaluation of this function at \( x=0 \) is fine as long as no floating point exception is raised to stop the program, because \( \arctan\infty = \frac{\pi}{2} \). Further, if the integrand's value at an endpoint is zero in the limit, then the floating point exception can be ignored (zero does not contribute).

I should add that Romberg implementations terminate at convergence, which can be too early if the number of points evaluated is insufficient to accurately assess convergence. I made sure that in my implementations the Romberg methods iterate at least 3 times for qromb (9 trapezoidal points) and at least 2 times for qromo (9 midpoints). While not always necessary, this increases the confidence we have in the error estimate when reaching convergence; a minimal sketch of such a guard follows below.

Next we evaluate \( \int_0^1 f(x)\,dx \) for a variety of functions \( f(x) \) with different characteristics. In the next test, ten "regular" functions are tested (1 to 10) and six extreme ones (11 to 16) to hit the methods hard. For the hard cases no convergence can be expected.
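For readers who want to see the shape of such a Romberg loop with the minimum-iteration guard, here is a minimal closed-interval sketch. It is not the qromb listing at the end of this post: the tableau bound of 32, the relative-error convergence test, and the name qromb_sketch are assumptions of this illustration.

Code:
#include <math.h>

/* Minimal Romberg sketch over a closed interval [a,b], assuming 1 <= n < 32. */
double qromb_sketch(double (*f)(double), double a, double b, int n, double eps)
{
  double R[32][32];                      /* Romberg tableau */
  double h = b - a;
  R[0][0] = h * (f(a) + f(b)) / 2;       /* 2-point trapezoidal rule */
  for (int i = 1; i < n; i++) {
    long k = 1L << (i - 1);              /* number of new midpoints this level */
    double sum = 0;
    h /= 2;
    for (long j = 1; j <= k; j++)        /* refine the trapezoidal sum */
      sum += f(a + (2 * j - 1) * h);
    R[i][0] = R[i - 1][0] / 2 + h * sum;
    double p = 4;
    for (int m = 1; m <= i; m++) {       /* Richardson extrapolation */
      R[i][m] = R[i][m - 1] + (R[i][m - 1] - R[i - 1][m - 1]) / (p - 1);
      p *= 4;
    }
    /* require at least 3 refinements (9 trapezoidal points) before
       trusting the convergence check, as discussed above */
    if (i >= 3 && fabs(R[i][i] - R[i - 1][i - 1]) <= eps * fabs(R[i][i]))
      return R[i][i];
  }
  return R[n - 1][n - 1];                /* no convergence: return best estimate */
}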
To test the extremes and make the methods work hard, we set the parameters fairly high: the Romberg qromb method tested performs at most N=23 subdivisions to evaluate the function up to 4 194 305 times; the Romberg qromo method tested performs at most N=15 subdivisions to evaluate the function up to 4 782 969 times; the Adaptive Simpson qasi method tested recurses up to a depth of 21, which when evaluated in full performs \( 2^{21+1}+1 = 4\,194\,305 \) function evaluations.

The following table shows the number of function evaluations \( nnn \) performed by each method in the three columns for each of the 16 functions tested with tolerance \( 10^{-9} \), annotated with the success or failure of the method to produce the desired result:

- \( nnn \) result with an acceptable error within the given tolerance \( 10^{-9} \)
- \( nnn^= \) exact result within \( 10^{-15} \)
- \( nnn^* \) max iterations or adaptive depth reached, residual error still within \( 10^{-9} \)
- \( nnn^e \) max iterations or adaptive depth reached, high residual error within \( 10^{-5} \)
- \( nnn^\dagger \) failed with incorrect result
- \( \dagger \) fatal: failed with NaN

$$ \begin{array}{cccccc}
 & f(x) & \int_0^1 f(x)\,dx & \texttt{qromb} & \texttt{qromo} & \texttt{qasi} \\ \hline
1 & x & \frac12 & 9^= & 9^= & 5^= \\
2 & x^3-2x^2+x & \frac{1}{12} & 9^= & 9^= & 5^= \\
3 & 4/(1+x^2) & \pi & 65 & 243 & 153 \\
4 & 1/(1+x) & \log2 & 33 & 243 & 97 \\
5 & \textrm{e}^x & \textrm{e}-1 & 17 & 81 & 65 \\
6 & \sin(x\pi) & \frac{2}{\pi} & 65 & 243 & 217 \\
7 & \arccos x & 1 & 262\,145 & 531\,441 & 489^* \\
8 & \arctan(1/x) & (2\log2+\pi)/4 & 33 & 81 & 93 \\
9 & \log x & -1 & \dagger & 4\,782\,969^e & \dagger \\
10 & 16x^4\log(2x+\sqrt{4x^2+1}) & 4.0766820599 & 65 & 243 & 449 \\
11 & \lfloor 10x\rfloor \bmod 2 & \frac12 & 65 & 4\,782\,969^e & 705^* \\
12 & \lfloor 8x\rceil \bmod 2 & \frac12 & 257 & 4\,782\,969^e & 5^\dagger \\
13 & \lfloor 6x\rceil \bmod 2 & \frac12 & 65 & 4\,782\,969^e & 441 \\
14 & 10x - \lfloor 10x\rfloor & \frac12 & 4\,194\,305^e & 4\,782\,969^e & 441^e \\
15 & \lfloor 10x\rfloor & 4\frac12 & 4\,194\,305^e & 4\,782\,969^e & 441^\dagger \\
16 & \textrm{random} & \approx\frac12 & 4\,194\,305^e & 4\,782\,969^e & 4\,194\,305^\dagger \\
\end{array} $$

Trigs are evaluated in radians, log has base e, and \( \lfloor x \rceil \) means round to nearest integer.

Integrals 1 and 2 are exact, as can be expected, with 9-point evaluation for Romberg to ensure accuracy (otherwise only 3 points would be evaluated for integrals 1 and 2). Integral 2 over a cubic requires 5 points for Adaptive Simpson, which is exact for cubics. Integrals 3 to 7 are proper and pose no significant issues. Integrals 8 and 9 are improper, but 8 can be evaluated on a closed interval. Integral 10 is from Numerical Recipes, for reference and verification. Integrals 11 to 15 have spiky integrands and are "integration busters". Integrands 11 to 13 are square wave functions that can fool a method's convergence check. Integrand 14 is a sawtooth function and integrand 15 is a step function.

When we tally the results for this experiment, we see that Romberg wins if we count the fewest function evaluations needed to obtain a reliable result within \( 10^{-6} \) relative error. This is not surprising, because we set the error threshold to \( 10^{-9} \) to make these methods work hard. On the other hand, even in this more extreme case Adaptive Simpson is still fast and very good most of the time, as long as the functions are well behaved, i.e. can be approximated by a cubic at the points evaluated.
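In case you want to reproduce such counts yourself, one simple way to tally function evaluations is to wrap the integrand in a counting function. This harness is only an illustration of how the tallies can be collected, not the test program used for the tables above; it assumes the qasi signature from the listings below and uses integrand 3 as an example.

Code:
#include <stdio.h>
#include <math.h>

/* qasi as declared in the C listings below (link against it) */
double qasi(double (*f)(double), double a, double b, double eps);

static long nevals;                       /* global evaluation counter */

static double f3(double x)                /* test integrand 3: 4/(1+x^2) */
{
  nevals++;
  return 4.0 / (1.0 + x * x);
}

int main(void)
{
  nevals = 0;
  double q = qasi(f3, 0.0, 1.0, 1e-9);    /* exact value is pi */
  printf("result = %.15g  abs error = %.3g  evaluations = %ld\n",
         q, fabs(q - 3.14159265358979324), nevals);
  return 0;
}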
While this is not an exhaustive comparison, it shows the successes and pitfalls of integration methods and that comparisons are sometimes apples to oranges. Which method we should pick depends on the integrand and the interval. More modern adaptive methods based on Gauss–Kronrod quadrature and Clenshaw–Curtis quadrature are now generally preferred.

For the remainder of the post I will comment on the C and BASIC implementations of qromb, qromo, and qasi. You may freely use any of the code here; please ask questions or PM me if something is not clear.

You may ask how it is possible to implement Adaptive Simpson in old BASIC dialects when BASIC has no recursive functions and a limited call stack depth for GOSUB. It is not too difficult when we use a stack to save and restore the recursive call parameters. First, we write the code in C for clarity:

Code:
double qasi(double (*f)(double), double a, double b, double eps) {

Next, we optimize for tail recursion by passing along the grand total t as a variable to be updated:

Code:
double qas_(double (*f)(double), double a, double b, double fa, double fm, double fb, double v, double eps, int n, double t) {

Finally, we convert the recursive version to a non-recursive version using a stack stk and stack pointer sp, where we also replace the check for depth n <= 0 with a comparison on eps that is halved at each level (making one or the other redundant); a compact sketch of this non-recursive form is appended at the end of this post:

Code:
double qas_(double (*f)(double), double a, double b, double fa, double fm, double fb, double v, double eps, int n, double t, int sp, double stk[]) {

The Adaptive Simpson BASIC program is shown here for the SHARP PC-1350, but it is easily converted to other BASIC dialects such as HP-71B. Note that A(27..166) is a SHARP array (indexed by stack pointer O) that does not require DIM:

Code:
' A,B range

The Romberg integration methods in C as used in the experiments, with n=23 and n=15 for qromb and qromo, respectively, and eps=1e-9:

Code:
double qromb(double (*f)(double), double a, double b, int n, double eps) {

The same Romberg integration methods in BASIC, with lower defaults of N=10 for at most 512 function evaluations and N=7 for at most 729 function evaluations, to limit the CPU time in case of slow or non-convergence:

Code:
' VARIABLES

Hope this is useful. Enjoy!

- Rob

Edit: minor additions to clarify n and N values in the implementations.
Edit2: the initial post assumes readers are familiar with integration methods; I added some paragraphs and a table to help casual readers.

"I count on old friends to remain rational"
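As a closing illustration, here is a compact, self-contained sketch of the non-recursive, stack-based Adaptive Simpson described above. It is not the qas_/qasi listing referenced in this post: the struct-based stack, the MAXSP bound, and the name qasi_stack are assumptions of this sketch, with each frame carrying the (a, b, fa, fm, fb, v, eps) state of a pending subinterval.

Code:
#include <math.h>

/* Non-recursive Adaptive Simpson sketch with an explicit stack.
   Illustration only; not the qas_/qasi listing from this post. */
double qasi_stack(double (*f)(double), double a, double b, double eps)
{
  enum { MAXSP = 64 };                      /* max pending subintervals */
  struct frame { double a, b, fa, fm, fb, v, eps; } stk[MAXSP];
  int sp = 0;
  double t = 0;                             /* grand total */
  double fa = f(a), fb = f(b);
  double m = (a + b) / 2, fm = f(m);
  double v = (b - a) * (fa + 4 * fm + fb) / 6;   /* Simpson over [a,b] */
  stk[sp++] = (struct frame){ a, b, fa, fm, fb, v, eps };
  while (sp > 0) {
    struct frame c = stk[--sp];
    double h   = (c.b - c.a) / 2;
    double flm = f(c.a + h / 2), frm = f(c.b - h / 2);
    double l = h * (c.fa + 4 * flm + c.fm) / 6;   /* Simpson, left half  */
    double r = h * (c.fm + 4 * frm + c.fb) / 6;   /* Simpson, right half */
    double d = l + r - c.v;
    if (fabs(d) <= 15 * c.eps || sp >= MAXSP - 2) {
      t += l + r + d / 15;                  /* accept with correction term */
    } else {                                /* split: push both halves */
      double mid = c.a + h;
      stk[sp++] = (struct frame){ c.a, mid, c.fa, flm, c.fm, l, c.eps / 2 };
      stk[sp++] = (struct frame){ mid, c.b, c.fm, frm, c.fb, r, c.eps / 2 };
    }
  }
  return t;
}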