Adaptive Simpson and Romberg integration methods
03-24-2021, 08:54 PM
(This post was last modified: 03-26-2021 08:19 PM by robve.)
Post: #1
Adaptive Simpson and Romberg integration methods
Moved this post to this new thread as suggested by others
I assume most readers are familiar with integration and quadratures. The methods and results reported here should not surprise anyone. Nevertheless, I hope this answers some of the questions frequently raised about numerical integration with these methods and how they compare.

As originally intended, this post is to be viewed in the context of vintage pocket computers with limited hardware: first find an approximation fast (within 1E-5, for example), and then take a look at what it would take to tighten the error tolerance to, say, 1E-9, and what the pitfalls of these methods are. Many modern and better integration methods exist that I am very familiar with, but those may not run on machines with just 1K~2K of RAM for programs and data. These little machines could run Simpson's rule that people could enter in BASIC, e.g. from listings in accompanying "application books" and from magazines. Likewise, this post includes listings for Romberg and Adaptive Simpson that will run on these little machines.

Adaptive Simpson is one of my favorite integration methods (on those machines). Adaptive Simpson is usually much more efficient than composite Simpson's rule and the Romberg methods, since it uses fewer function evaluations in places where the integrand is well-approximated by a cubic function. "Adaptive Simpson's method uses an estimate of the error we get from calculating a definite integral using Simpson's rule. If the error exceeds a user-specified tolerance, the algorithm calls for subdividing the interval of integration in two and applying adaptive Simpson's method to each subinterval in a recursive manner." - https://en.wikipedia.org/wiki/Adaptive_S...27s_method

The HP-71B Math Pac uses the Romberg method. One advantage of Romberg over Adaptive Simpson is its error estimate, which is based on the error of a high-degree interpolating polynomial constructed for the integrand. This error estimate can be used reliably unless the integrand is much "too spiky", that is, has large nonzero high-order derivatives, as in step functions and square and sawtooth waves, which have non-differentiable points. Nevertheless, the final error estimate when convergence is not reached is rarely reported by an integration method. The listings at the end of this post include code to report the final error estimate on non-convergence.

For this post I will use tried and tested implementations of Adaptive Simpson (qasi) and Romberg (qromb and qromo) that I wrote, based on optimized versions of published algorithms in reputable sources such as Numerical Recipes and NIST FORTRAN programs. I believe these are better and safer than those found in places like Wikipedia, whose listings contain errors. I've tested these implementations and compared the results and the number of function evaluations to other Romberg integrators to corroborate the results. Nevertheless, many other good implementations are possible. The listings of the implementations in C and in BASIC are discussed further below if you are interested and want to reproduce the results.

I received some feedback and questions about the Romberg implementations in SHARP BASIC that I showed and used in earlier posts. I will use the same programs here, but written in C for speed. The BASIC implementations are derived from these C versions and are also included at the end of this post. The integration methods take 5 parameters: the function f, the integration range a to b (finite), the maximum number of iterations n, and the error tolerance eps.
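For reference, these are the C signatures used in this post. The qromb and qasi prototypes match the listings at the end of this post; qromo's signature is an assumption here, mirroring qromb's:

Code:
double qromb(double (*f)(double), double a, double b, int n, double eps);
double qromo(double (*f)(double), double a, double b, int n, double eps); /* assumed */
double qasi (double (*f)(double), double a, double b, double eps);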
There are two versions of Romberg, one for closed intervals (qromb) and one for open intervals (qromo). To get started, we compute \( \int_{-1}^1 p(x)\,dx \) and collect the average number of integrand evaluations over 1000 different polynomials \( p(x) \) of degrees 1 to 30 to reach convergence: $$ \begin{array}{rrrr} \textit{eps} & \texttt{qromb} & \texttt{qromo} & \texttt{qasi} \\ \hline 10^{-3} & 36 & 96 & 23 \\ 10^{-5} & 72 & 234 & 62 \\ 10^{-7} & 127 & 448 & 195 \\ 10^{-9} & 198 & 887 & 606 \\ \end{array} $$ The averages are rounded to the nearest integer. Polynomials are randomly generated with coefficients \( -1\le a_i\le 1 \). For polynomials (and many other functions), Adaptive Simpson converges quickly if the error threshold eps is loose enough to allow lower precision. Otherwise, Romberg over closed intervals appears to be best for achieving high accuracy on polynomials. Not entirely surprising, because the methods essentially perform polynomial interpolation and extrapolation to determine the definite integral.

In practice, Adaptive Simpson is really, really good for getting a quick approximation, with only two minor drawbacks:
1. Adaptive Simpson may not produce usable error estimates of the result when the function is not well approximated by a cubic at the evaluated points. (Note: modifications are possible that produce an error estimate, but we're talking about the straightforward method.)
2. Adaptive Simpson uses closed intervals and may fail to integrate improper integrals. Improper integrals require integration over an open interval to avoid evaluating the endpoints, for example using Romberg with midpoint quadratures; see e.g. Numerical Recipes 2nd Ed. Ch.4.4 qromo.

If the integrand has singularities (non-evaluable points), then one can split the interval into parts and use a quadrature suitable for open intervals to evaluate the parts separately. This may not always be necessary, because equidistant methods like Romberg can skip over singularities if you control the points carefully. Adaptive Simpson will very likely not work, because it may subdivide its intervals until it hits the problematic points. On the other hand, open intervals may not be necessary for integrands that produce a valid value at the endpoints, such as \( \int_0^1 \arctan(1/x)\,dx \), which evaluates to \( \pi/2 \) at \( x=0 \). Evaluating this function at \( x=0 \) is fine as long as no floating point exception is raised to stop the program, because \( \arctan\infty = \frac{\pi}{2} \). Further, if the integrand's value at an endpoint is zero in the limit, then the floating point exception can be ignored (zero does not contribute).

I should add that Romberg implementations terminate at convergence, which can be too early if the number of points evaluated is insufficient to accurately assess convergence. I made sure that in my implementations the Romberg methods iterate at least 3 times (9 trapezoidal points) and at least 2 times (9 midpoints). While not always necessary, this increases the confidence we have in the error estimate when convergence is reached.

Next we evaluate \( \int_0^1 f(x)\,dx \) for a variety of functions \( f(x) \) with different characteristics. In the next test, ten "regular" functions are tested (1 to 10) and six extreme ones (11 to 16) to hit the methods hard. For the hard cases no convergence can be expected.
To test the extremes and make the methods work hard, we set the parameters fairly high: the Romberg qromb method tested performs at most N=23 subdivisions to evaluate the function up to 4 194 305 times. The Romberg qromo method tested performs at most N=15 subdivisions to evaluate the function up to 4 782 969 times. The Adaptive Simpson qasi method tested is recursive up to a depth of 21, which when evaluated in full performs \( 2^{21+1}+1 = 4\,194\,305 \) function evaluations. The following table shows the number of function evaluations \( nnn \) performed by each method in the three columns for each of the 16 functions tested with tolerance \( 10^{-9} \), annotated with the success or failure of the method to produce the desired result:
- \( nnn \) result with an acceptable error within the given tolerance \( 10^{-9} \)
- \( nnn^= \) exact result within \( 10^{-15} \)
- \( nnn^* \) max iterations or adaptive depth reached, residual error still within \( 10^{-9} \)
- \( nnn^e \) max iterations or adaptive depth reached, high residual error within \( 10^{-5} \)
- \( nnn^\dagger \) failed with incorrect result
- \( \dagger \) fatal: failed with NaN

$$ \begin{array}{cccccc} & f(x) & \int_0^1 f(x)\,dx & \texttt{qromb} & \texttt{qromo} & \texttt{qasi} \\ \hline 1 & x & \frac12 & 9^= & 9^= & 5^= \\ 2 & x^3-2x^2+x & \frac{1}{12} & 9^= & 9^= & 5^= \\ 3 & 4/(1+x^2) & \pi & 65 & 243 & 153 \\ 4 & 1/(1+x) & \log2 & 33 & 243 & 97 \\ 5 & \textrm{e}^x & \textrm{e}-1 & 17 & 81 & 65 \\ 6 & \sin(x\pi) & \frac{2}{\pi} & 65 & 243 & 217 \\ 7 & \arccos x & 1 & 262\,145 & 531\,441 & 489^* \\ 8 & \arctan(1/x) & (2\log2+\pi)/4 & 33 & 81 & 93 \\ 9 & \log x & -1 & \dagger & 4\,782\,969^e & \dagger \\ 10 & 16x^4\log(2x+\sqrt{4x^2+1}) & 4.0766820599 & 65 & 243 & 449 \\ 11 & \lfloor 10x\rfloor \mod 2 & \frac12 & 65 & 4\,782\,969^e & 705^* \\ 12 & \lfloor 8x\rceil \mod 2 & \frac12 & 257 & 4\,782\,969^e & 5^\dagger \\ 13 & \lfloor 6x\rceil \mod 2 & \frac12 & 65 & 4\,782\,969^e & 441 \\ 14 & 10x - \lfloor 10x\rfloor & \frac12 & 4\,194\,305^e & 4\,782\,969^e & 441^e \\ 15 & \lfloor 10x\rfloor & 4\frac12 & 4\,194\,305^e & 4\,782\,969^e & 441^\dagger \\ 16 & \textrm{random} & \approx\frac12 & 4\,194\,305^e & 4\,782\,969^e & 4\,194\,305^\dagger \\ \end{array} $$

Trigs are evaluated in radians, log has base e, and \( \lfloor x \rceil \) means round to nearest integer. Integrals 1 and 2 are exact, as can be expected, with 9-point evaluation for Romberg to ensure accuracy (otherwise only 3 points would be evaluated for integrals 1 and 2). Integral 2 over a cubic requires 5 points for Adaptive Simpson, which is exact for cubics. Integrals 3 to 7 are proper and pose no significant issues. Integrals 8 and 9 are improper, but 8 can be evaluated on a closed interval. Integral 10 is from Numerical Recipes, for reference and verification. Integrals 11 to 15 have spiky integrands and are "integration busters". Integrands 11 to 13 are square wave functions that can fool a method's convergence check. Integrand 14 is a sawtooth function and integrand 15 is a step function.

When we tally the results for this experiment, we see that Romberg wins if we count the fewest function evaluations to obtain a reliable result within \( 10^{-6} \) relative error. Not surprising, because we set the error threshold to \( 10^{-9} \) to make these methods work hard. On the other hand, even in this more extreme case Adaptive Simpson is still fast and very good most of the time, as long as the functions are well behaved, i.e. can be approximated by a cubic at the points evaluated.
While this is not an exhaustive comparison, it shows the successes and pitfalls of integration methods, and that comparisons are sometimes apples to oranges. Which method we should pick depends on the integrand and the interval. More modern adaptive methods based on Gauss–Kronrod quadrature and Clenshaw–Curtis quadrature are now generally preferred.

For the remainder of the post I will comment on the C and BASIC implementations of qromb, qromo, and qasi. You may freely use any of the code here, and please ask questions or PM me if something is not clear.

You may ask how it is possible to implement Adaptive Simpson in old BASIC dialects when BASIC has no recursive functions and a limited call stack depth for GOSUB. It is not too difficult when we use a stack to save and restore the recursive call parameters. First, we write the code in C for clarity:

Code: double qasi(double (*f)(double), double a, double b, double eps) {

Next, we optimize for tail recursion by passing along the grand total t as a variable to be updated:

Code: double qas_(double (*f)(double), double a, double b, double fa, double fm, double fb, double v, double eps, int n, double t) {

Finally, we convert the recursive version to a non-recursive version using a stack stk and stack pointer sp, where we also replace the check for depth n <= 0 with a comparison on eps that is halved at each level (making one or the other redundant):

Code: double qas_(double (*f)(double), double a, double b, double fa, double fm, double fb, double v, double eps, int n, double t, int sp, double stk[]) {

The Adaptive Simpson BASIC program, here for the SHARP PC-1350 but easily converted to other BASIC dialects such as HP-71B; note that A(27..166) is a SHARP array (indexed by stack pointer O) that does not require DIM:

Code: ' A,B range

The Romberg integration methods in C as used in the experiments, with n=23 and n=15 for qromb and qromo, respectively, and eps=1e-9:

Code: double qromb(double (*f)(double), double a, double b, int n, double eps) {

The same Romberg integration methods in BASIC, with lower defaults N=10 for at most 512 function evaluations and N=7 for at most 729 function evaluations, to limit the CPU time in case of slow or non-convergence:

Code: ' VARIABLES

Hope this is useful. Enjoy!

- Rob

Edit: minor additions to clarify n and N values in the implementations.
Edit2: the initial post assumes readers are familiar with integration methods; I added some paragraphs and a table to help casual readers.

"I count on old friends to remain rational"
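To complement the truncated listings above, here is a minimal self-contained C sketch of the final, stack-based form (not the exact qas_ listing; names and the depth bound are illustrative). The left half is processed immediately while the right half's parameters are pushed onto stk, and eps is halved at each level as described:

Code:
#include <math.h>

#define MAXDEPTH 32  /* illustrative bound; each level pushes 7 doubles */

double qasi_sketch(double (*f)(double), double a, double b, double eps) {
  double stk[7*MAXDEPTH];                 /* frames: a, b, fa, fm, fb, v, eps */
  int sp = 0;
  double fa = f(a), fm = f((a + b)/2), fb = f(b);
  double v = (fa + 4*fm + fb)*(b - a)/6;  /* Simpson estimate over [a,b] */
  double t = 0;                           /* grand total */
  for (;;) {
    double m = (a + b)/2;
    double fml = f((a + m)/2), fmr = f((m + b)/2);
    double s1 = (fa + 4*fml + fm)*(m - a)/6;  /* Simpson, left half */
    double s2 = (fm + 4*fmr + fb)*(b - m)/6;  /* Simpson, right half */
    double d = s1 + s2 - v;                   /* ~ 15x the true error */
    if (fabs(d) <= 15*eps || sp >= 7*MAXDEPTH) {
      t += s1 + s2 + d/15;              /* accept, with Richardson correction */
      if (sp == 0) return t;            /* nothing deferred: done */
      sp -= 7;                          /* pop the deferred right half */
      a = stk[sp];    b  = stk[sp+1];
      fa = stk[sp+2]; fm = stk[sp+3]; fb = stk[sp+4];
      v  = stk[sp+5]; eps = stk[sp+6];
    } else {
      stk[sp]   = m;  stk[sp+1] = b;    /* defer the right half */
      stk[sp+2] = fm; stk[sp+3] = fmr; stk[sp+4] = fb;
      stk[sp+5] = s2; stk[sp+6] = eps/2;
      sp += 7;
      b = m; fb = fm; fm = fml; v = s1; eps /= 2;  /* descend into left half */
    }
  }
}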
03-24-2021, 10:19 PM
Post: #2
RE: Adaptive Simpson and Romberg integration methods
(03-24-2021 08:54 PM)robve Wrote: On the other hand, open intervals may not be necessary for integrands that produce a valid value at the endpoints, such as \( \int_0^1 \arctan(1/x)\,dx \) for example that evaluates to \( \pi/2 \) at \( x=0 \). Evaluation of this function at \( x=0 \) is fine as long as no floating point exception is raised to stop the program, because \( \arctan\infty = \frac{\pi}{2} \).

FYI, you can avoid infinity by doing atan(1/x) = sign(x)*pi/2 - atan(x)

∫(atan(1/x), x=0 .. 1) = pi/2 - ∫(atan(x), x=0 .. 1)

This integral can also easily be evaluated symbolically:

∫(atan(1/x) dx) // u=atan(1/x), v=x
= atan(1/x)*x + ∫(x/(1+x²) dx) // ∫u dv = u*v - ∫v du
= atan(1/x)*x + 1/2 * ∫(1/y dy) // y=1+x², dy = 2x dx
= atan(1/x)*x + 1/2 * ln(1+x²)

∫(atan(1/x), x=0 .. 1) = pi/4 + ln(2)/2
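A quick numerical sanity check of that closed form (a standalone sketch using a crude midpoint sum, not one of the quadratures in this thread):

Code:
#include <math.h>
#include <stdio.h>

int main(void) {
  const double pi = 4*atan(1.0);
  int i, n = 1000000;
  double s = 0;
  for (i = 0; i < n; i++) {
    double x = (i + 0.5)/n;   /* midpoint of the i-th subinterval */
    s += pi/2 - atan(x);      /* rewritten integrand, no 1/x at x=0 */
  }
  printf("midpoint sum: %.12f\n", s/n);
  printf("pi/4+ln(2)/2: %.12f\n", pi/4 + log(2)/2);
  return 0;
}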
03-24-2021, 11:11 PM
Post: #3
RE: Adaptive Simpson and Romberg integration methods
(03-24-2021 08:54 PM)robve Wrote: Improper integrals require integration over an open interval to avoid evaluating the endpoints, for example using Romberg with midpoint quadratures, see e.g. Numerical Recipes 2nd Ed. Ch.4.4 qromo and as implemented in the HP-71B Math Pac.

No. Closed intervals are fine, if we can assume the end-points really are zero. Normally, the u-transformation will achieve this, but not always ...

A counter-example, from one of my old threads. Expected: \(\displaystyle \int _{-1}^1 {dx \over \sqrt{1-x^2}} = \arcsin(x)\bigg| _{-1}^1 = \pi\)

(02-06-2020 11:16 PM)Albert Chan Wrote: A simple test showed that HP71B INTEGRAL do extrapolations from trapezoids, not mid-points rule.
03-24-2021, 11:51 PM
(This post was last modified: 03-25-2021 01:28 AM by robve.)
Post: #4
RE: Adaptive Simpson and Romberg integration methods
(03-24-2021 10:19 PM)Albert Chan Wrote: FYI, you can avoid infinity by doing atan(1/x) = sign(x)*pi/2 - atan(x)

Quite right. Point taken. But not everyone will see or understand your smart change. I picked this example somewhat arbitrarily to show that there are cases when the integral may not appear to be evaluable at an endpoint when in fact it is, provided no floating point exception is raised. If an exception is raised, then there are two options: either rewrite the integrand (if possible) or use quadratures that are applicable to open intervals.

- Rob

"I count on old friends to remain rational"
03-25-2021, 01:58 AM
(This post was last modified: 03-25-2021 01:02 PM by robve.)
Post: #5
RE: Adaptive Simpson and Romberg integration methods
(03-24-2021 11:11 PM)Albert Chan Wrote: (03-24-2021 08:54 PM)robve Wrote: Improper integrals require integration over an open interval to avoid evaluating the endpoints, for example using Romberg with midpoint quadratures, see e.g. Numerical Recipes 2nd Ed. Ch.4.4 qromo and as implemented in the HP-71B Math Pac.

Makes sense. Let me add to this that if the value at the endpoint is zero then it can certainly be "ignored". A closed interval integration method can be used by ignoring such floating point exceptions. So yes, that works in those cases. However, this is a specific case for zero. We're considering any finite limiting value at one or both of the endpoints. An integral is improper and may require open intervals if its integrand goes to a finite limiting value at its finite upper and lower limits (the integral endpoints) but cannot be evaluated right on one of those limits, for example \( \frac{\sin x}{x} \) at \( x=0 \), since \( \lim_{x\rightarrow 0} \frac{\sin x}{x} = 1 \).

It is odd that the HP-71B Math Pac uses trapezoidal quadratures when trying to exclude endpoints. The recommended Romberg modification is to use an open quadrature as the basis to replace the (popular) trapezoidal one, thereby preserving the nice properties of Romberg. Numerical Recipes 2nd ed. thus gives a generalization of the Romberg algorithm with open quadratures, but no specific implementation, which with some effort can be derived, e.g. as qromo in the post (my qromo is not the same as the generic qromo in NR 2nd ed., and differs significantly in structure). The midpoint version qromo essentially performs high-degree polynomial extrapolation to obtain the finite values at the (missing) endpoints of the integration interval. The integral's precision is about as high as with the trapezoidal version, despite the fact that the integrand is never evaluated at the endpoints, when considering the same number of points. Note the ratio of n subdivisions is about 2:3 for midpoint versus trapezoidal to get very roughly the same number of points in each (see post).

- Rob

Edit: fixed typo and added last sentence.

"I count on old friends to remain rational"
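To make the step-tripling concrete, here is a minimal C sketch of one refinement step of the extended midpoint rule, the building block behind qromo (illustrative, not the thread's exact listing). Each call triples the number of sample points, reuses every previous point, and never touches the endpoints:

Code:
/* Refine the extended midpoint estimate s of the integral of f over [a,b].
   *n is the current number of points (pass 0 for the first call).
   Returns the refined estimate; afterwards *n has tripled. */
double midpoint_step(double (*f)(double), double a, double b,
                     double s, long *n) {
  long i, it = *n;
  double h, x, sum = 0;
  if (it == 0) { *n = 1; return (b - a)*f((a + b)/2); }
  h = (b - a)/(3.0*it);           /* new, three times finer step */
  x = a + h/2;
  for (i = 0; i < it; i++) {
    sum += f(x); x += 2*h;        /* first new point in an old panel */
    sum += f(x); x += h;          /* second new point; skip the reused one */
  }
  *n = 3*it;
  return s/3 + h*sum;             /* old points keep weight h at the new step */
}

After k calls the estimate uses 3^(k-1) points, which is exactly the tripling ratio mentioned above.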
03-25-2021, 12:52 PM
Post: #6
RE: Adaptive Simpson and Romberg integration methods
Very nice routine! Here's how to do it in a modern BASIC
Code: DECLARE EXTERNAL FUNCTION qasi

Tom L
Cui bono?
03-25-2021, 12:54 PM
(This post was last modified: 03-25-2021 01:30 PM by robve.)
Post: #7
RE: Adaptive Simpson and Romberg integration methods
Albert,
Out of curiosity, to see what the difference would be in the number of function evaluations and the error of the result if we forcibly ignore the non-evaluable endpoint at \( x=0 \) for \( \int_0^1 \frac{\sin x}{x} \,dx \) as an example, we get:

$$ \begin{array}{cccccc} & f(x) & \int_0^1 f(x)\,dx & \texttt{qromb} & \texttt{qromo} & \texttt{qasi} \\ \hline 17 & \sin(x)/x & 0.946083070367 & \dagger & 81 & \dagger \\ 18 & \begin{cases} 0 & x=0 \\ \sin(x)/x & x\neq 0 \end{cases} & 0.946083070367 & 4\,194\,305^e & 81 & 109^\dagger \\ \end{array} $$

- \( nnn \) result within the specified \( 10^{-9} \)
- \( nnn^e \) max iterations or adaptive depth reached, high residual error within \( 10^{-5} \)
- \( \dagger \) failed with NaN
- \( nnn^\dagger \) failed with incorrect result

Integral 17 produces NaN because of the fp exception at \( x=0 \), for both Romberg closed interval qromb and Adaptive Simpson qasi. This is not surprising, of course. Romberg open interval qromo produces the exact value up to 12 digits(!) even though we specified eps=1E-9.

Integral 18 attempts to ignore the endpoint's evaluation by setting it to zero in the integrand as a "hack". This simulates an implementation in which exceptions and NaN are replaced by a zero integrand value internally, as a misguided measure to fix the problem. In this case the integrand value in the limit is \( \lim_{x\rightarrow 0}\frac{\sin x}{x} = 1 \), not zero. Note that the Romberg closed interval and Adaptive Simpson results are both inaccurate. This is not surprising, because Romberg's interpolating polynomial includes \( (x,y)=(0,0) \) when \( (x,y)=(0,1) \) is correct. Adaptive Simpson also includes the point \( (x,y)=(0,0) \). The errors are not big, but the integration results are misleading (0.94608299775 and 0.946082996192). The Romberg closed interval non-convergence error estimate of the result is also incorrect (7.67547e-08, despite the 5 significant digits that match, because of the forced zero).

Interestingly, the HP PRIME HOME produces the correct 0.946083070367 and CAS returns Si(1), the "sine integral of 1".

- Rob

"I count on old friends to remain rational"
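The right guard supplies the limit value rather than zero; a hypothetical wrapper for closed-interval methods:

Code:
#include <math.h>

/* guarded integrand: supply the limit value 1 at x = 0, not 0 */
double sinc(double x) { return x == 0 ? 1.0 : sin(x)/x; }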
03-25-2021, 01:52 PM
Post: #8
RE: Adaptive Simpson and Romberg integration methods
Quote: I should add that Romberg implementations terminate at convergence, which can be too early if the number of points evaluated is insufficient to accurately assess convergence. I made sure that in my implementations the Romberg methods iterate at least 3 times (9 trapezoidal points) and 2 times (9 midpoints).

Let's take a look at integrals 12 and 13, over square wave integrands that can fool a quadrature. With the modest modification above we have (see original post):

$$ \begin{array}{cccccc} & f(x) & \int_0^1 f(x)\,dx & \texttt{qromb} & \texttt{qromo} & \texttt{qasi} \\ \hline 12 & \lfloor 8x\rceil \mod 2 & \frac12 & 257 & 4\,782\,969^e & 5^\dagger \\ 13 & \lfloor 6x\rceil \mod 2 & \frac12 & 65 & 4\,782\,969^e & 441 \\ \end{array} $$

For integral 12, some of the 9 points are on \( y=0 \) and the others are on \( y=1 \). Therefore, the methods continue to subdivide the interval in an attempt to reach convergence. Not so for Adaptive Simpson. Its 5 points are all on \( y=0 \), and the method falsely concludes that the integrand is the constant zero function, so the integral is incorrectly reported as zero. So what happens if we remove the iteration constraints i > 2 and i > 1 from qromb and qromo, respectively?

$$ \begin{array}{cccccc} & f(x) & \int_0^1 f(x)\,dx & \texttt{qromb} & \texttt{qromo} & \texttt{qasi} \\ \hline 12^\text{opt} & \lfloor 8x\rceil \mod 2 & \frac12 & 3^\dagger & 4\,782\,969^e & 5^\dagger \\ 13^\text{opt} & \lfloor 6x\rceil \mod 2 & \frac12 & 65 & 3^\dagger & 441 \\ \end{array} $$

- \( nnn \) result with an acceptable error within the given tolerance
- \( nnn^e \) max iterations or adaptive depth reached, high residual error within \( 10^{-5} \)
- \( nnn^\dagger \) failed with incorrect result

After removing the constraints, Romberg closed interval fails to produce the correct result for integral 12, because its 3 trapezoidal point values are all \( y=0 \). The same happens with integral 13, but in this case Romberg open interval fails, because its 3 midpoint values are all \( y=1 \). The iteration constraints i > 2 and i > 1 in qromb and qromo are the bare minimum that we should demand. The more initial points we evaluate, the more robust we are when integrating these types of integrands. Is it possible to fix Adaptive Simpson in a similar way?

- Rob

"I count on old friends to remain rational"
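For concreteness, below is a compact C sketch of a closed-interval Romberg with the i > 2 floor in its convergence test. It is illustrative only, not the exact qromb behind the tables; the tableau is kept as a single row updated in place:

Code:
#include <math.h>

#define MAXLEV 23                       /* max refinement levels */

double qromb_sketch(double (*f)(double), double a, double b, int n, double eps) {
  double R[MAXLEV];                     /* current row of the Romberg tableau */
  double h = b - a;
  double t = h*(f(a) + f(b))/2;         /* 2-point trapezoid, level 0 */
  int i;
  R[0] = t;
  for (i = 1; i < n && i < MAXLEV; i++) {
    long k, m = 1L << (i - 1);          /* number of new midpoints */
    double sum = 0, lower, p4;
    int j;
    h /= 2;
    for (k = 0; k < m; k++) sum += f(a + (2*k + 1)*h);
    t = t/2 + h*sum;                    /* step-doubled trapezoid estimate */
    lower = R[0]; R[0] = t; p4 = 4;
    for (j = 1; j <= i; j++) {          /* Richardson extrapolation in place */
      double hold = R[j];
      R[j] = R[j-1] + (R[j-1] - lower)/(p4 - 1);
      lower = hold; p4 *= 4;
    }
    /* the i > 2 floor: at least 9 trapezoidal points before convergence */
    if (i > 2 && fabs(R[i] - R[i-1]) <= eps) return R[i];
  }
  return R[i-1];                        /* best estimate on non-convergence */
}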
03-25-2021, 03:09 PM
Post: #9
RE: Adaptive Simpson and Romberg integration methods
(03-25-2021 01:58 AM)robve Wrote: Weird that HP-71B Math Pac uses trapezoidal quadratures when trying to exclude endpoints.

That may be because trapezoids give a better estimate than the mid-point rule?

Let's try \(\displaystyle \int_1^e {dx \over x} = \ln(e) - \ln(1) = 1 \)

To simplify, we skip the u-transformation and supply the first trapezoid (t1) instead. (We reach the same conclusion either way; it is just easier to visualize 1/x.)

lua> I = require'integ' -- code posted here
lua> e = exp(1)
lua> f = function(x) return 1/x end
lua> t1 = (f(1)+f(e))/2 * (e-1)
lua> g = I.tm(f,1,e,t1) -- generate trapezoids and mid-points area (both using the *same* points)
lua> for i=1,6 do t,m = g(); print(2^i, (1-m)/(1-t)) end
2 -1.5239160915264984
4 -1.8129951785930594
8 -1.9427734883703107
16 -1.984685010021192
32 -1.996097578762125
64 -1.99901957558112

Trapezoids seem to be twice as good as mid-points (in the opposite direction). When we extrapolate with Romberg, trapezoids are *much* better:

lua> I.integ(I.simpson(f,1,e,t1))
n T = extrapolated-t M = extrapolated-m error(M)/error(T)
4 1.0003409768323182 0.9937356332778096 --> -18
8 1.000007899493941 0.9996852308223878 --> -40
16 1.0000000801614912 0.9999923219175764 --> -96
32 1.0000000003184022 0.9999999206312571 --> -249
64 1.00000000000046 0.9999999996826735 --> -690
...

n is the number of sub-intervals for trapezoids, which is halved for M. Ignoring end-points, both T and M require n-1 function evaluations. With the same sub-intervals, both schemes have comparable errors. Example:

error(T32) = 1 - 1.0000000003184022 ≈ −3.18e-10
error(M64) = 1 - 0.9999999996826735 ≈ +3.17e-10

But getting M64 requires double the time of T32.
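The first experiment is easy to reproduce without the Lua module. A standalone C sketch under the same setup (the midpoint rule reuses the odd trapezoid points, so it has half as many samples) for \( \int_1^e dx/x = 1 \); the printed ratio tends to −2, matching the table above:

Code:
#include <math.h>
#include <stdio.h>

int main(void) {
  double a = 1, b = exp(1);
  int n;
  for (n = 2; n <= 64; n *= 2) {
    double h = (b - a)/n, t, m = 0;
    int i;
    t = (1/a + 1/b)/2;
    for (i = 1; i < n; i++) t += 1/(a + i*h);     /* trapezoid, n intervals */
    t *= h;
    for (i = 1; i < n; i += 2) m += 1/(a + i*h);  /* midpoints: odd points only */
    m *= 2*h;                                     /* n/2 points, step 2h */
    printf("%2d  err(t)=%+.3e  err(m)=%+.3e  ratio=%.4f\n",
           n, 1 - t, 1 - m, (1 - m)/(1 - t));
  }
  return 0;
}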
03-25-2021, 04:48 PM
(This post was last modified: 03-25-2021 05:57 PM by robve.)
Post: #10
RE: Adaptive Simpson and Romberg integration methods
(03-25-2021 03:09 PM)Albert Chan Wrote: Trapezoids seems to be twice as good as mid-points (in the opposite direction).

Yes and no, as I think you are pointing out. Generally yes when you use them in isolation or in succession to sum up a quadrature. No when you use them as building blocks for the "extended" open and closed formulas used in Romberg. The best choices for Romberg are the extended trapezoidal rule for closed intervals $$ \int_{x_1}^{x_N} f(x)\,dx = h\left[ \frac12 f_1 + f_2 + f_3 + \cdots + f_{N-1} + \frac12 f_N \right] + O\left(\frac{(b-a)^3f''}{N^2}\right) $$ and the extended midpoint rule for open intervals $$ \int_{x_1}^{x_N} f(x)\,dx = h\left[ f_{3/2} + f_{5/2} + f_{7/2} + \cdots + f_{N-3/2} + f_{N-1/2} \right] + O\left(\frac{1}{N^2}\right) $$

So there is definitely not a "twice as good" comparison when we consider Romberg integration. Romberg combines the previous interpolated result with the next using extrapolation to produce even better results by cancelling leading error terms, in a more general way than how Simpson's rule cancels its leading error term: $$ S = \frac43 S_{2N} - \frac13 S_N $$ to get \( O(1/N^4) \) instead of \( O(1/N^2) \). But Simpson's rule is a special case, while Romberg is general. That is the whole point of using Romberg. In fact, for \( k = 2 \) refinements, Romberg is the same as Simpson's rule, and much better for larger \( k \). Romberg's \( k \) successive refinements by subdividing the interval have an error series up to but not including \( O(1/N^{2k}) \) for the trapezoidal basis.

With the extended midpoint rule, there is a problem: doubling the steps cannot preserve the benefits of the method, so steps are tripled instead. This does \( \sqrt{3} \) additional work that may be unnecessary, instead of \( \sqrt{2} \) for the extended trapezoidal rule, but we are now able to reuse the previous results to obtain an error series up to but not including \( O(1/N^{3k}) \) for the extended midpoint basis. Remember that each step triples the number of points (extended midpoint) rather than doubling it (extended trapezoidal), so a lower max \( k \) is applicable to midpoint to get the same number of points as trapezoidal, by a ratio of 2:3, which to some extent can be experimentally confirmed in the table and in other runs I've tried. The ratio 2:3 is of course idealized. In practice these subtle differences in the evaluated points can throw this off quite easily*)

There is a bit more theory behind this, which would make this post a lot longer. An excellent summary can be found in Numerical Recipes Ch. 4, Integration of Functions. Sorry if it was not clear what "trapezoidal" and "midpoint" meant in the original posts.

Edit: *) and there is of course the difference in the little bit of unnecessary work that is thrown away. Sometimes the difference can be big, however, such as: $$ \begin{array}{cccccc} & f(x) & \int_0^1 f(x)\,dx & \texttt{qromb} & \texttt{qromo} & \texttt{qasi} \\ \hline 19 & x^7-3x^6+5x^5-7x^4+7x^3-5x^2+3x-1 & -0.28690476190 & 17^= & 81^= & 205 \\ \end{array} $$

Edit 2: I briefly read your post in the other thread on trapezoidal and midpoint. I could be mistaken, but I don't think this is correct to do to get (extended) midpoints:

for i = 1, n, 2 do m = m + f(a + i*h) end
m = m*h -- midpoint area / 2
t = t*0.5 + m -- trapezoid area

- Rob

"I count on old friends to remain rational"
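To make the extended trapezoidal bookkeeping concrete, here is a minimal C sketch of one step-doubling refinement together with the classic combine \( S = \frac43 T_{2N} - \frac13 T_N \) that reproduces Simpson's rule, exactly Romberg's \( k=2 \) column. Illustrative only, not the thread's listings:

Code:
/* Refine the extended trapezoidal estimate s of the integral of f over [a,b].
   *n is the current number of intervals (pass 0 for the first call). */
double trapezoid_step(double (*f)(double), double a, double b,
                      double s, long *n) {
  long i, it = *n;
  double h, sum = 0;
  if (it == 0) { *n = 1; return (b - a)*(f(a) + f(b))/2; }
  h = (b - a)/(2.0*it);                                /* halved step */
  for (i = 0; i < it; i++) sum += f(a + (2*i + 1)*h);  /* new midpoints only */
  *n = 2*it;
  return s/2 + h*sum;
}

/* Usage: two refinements and one Richardson combine give Simpson's rule.
     long n = 0;
     double t1 = trapezoid_step(f, a, b, 0, &n);   -- n: 0 -> 1
     double t2 = trapezoid_step(f, a, b, t1, &n);  -- n: 1 -> 2
     double S  = (4*t2 - t1)/3;                    -- Simpson on 3 points */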
03-25-2021, 07:19 PM
Post: #11
RE: Adaptive Simpson and Romberg integration methods
(03-25-2021 03:09 PM)Albert Chan Wrote: (03-25-2021 01:58 AM)robve Wrote: Weird that HP-71B Math Pac uses trapezoidal quadratures when trying to exclude endpoints. That may be because trapezoids gives better estimate than mid-point rules ?

No, that approach has problems and is not a step in the right direction, though it could work, just not as well. Wes suggested in the thread "HP71B Integral Questions":

(02-08-2020 10:46 AM)Wes Loewer Wrote: I'm still wrapping my head around my very long-term misunderstanding of hp's implementation of the Romberg method. So how does this work to use trapezoids but not evaluate the endpoints? Do you just leave out the two outermost trapezoids?

Romberg with the extended midpoint rule triples the points in each step, reusing previous points to extrapolate.

- Rob

"I count on old friends to remain rational"
03-25-2021, 08:43 PM
Post: #12
RE: Adaptive Simpson and Romberg integration methods
(03-25-2021 04:48 PM)robve Wrote: Edit 2: I briefly read your post in the other thread on trapezoidal and midpoint. I could be mistaken, but I don't think this is correct to do to get (extended) midpoints:

As you had mentioned, with bisected intervals you cannot reuse previous points. As a compromise, I reuse the sum for generating the next trapezoids.

m = f1 + f3 + f5 + ... + fn-1

Each point has a weight of 2h, thus m*h only gets half the midpoint area, using n/2 points. Again, m was produced only because it was free, a by-product of getting the next t.

---

Instead of theory, try ∫(f(x), x=0 .. 1), mid-point rule, from 1 to 3 intervals.

m1 = f(1/2)
m3 = (f(1/6) + f(1/2) + f(5/6)) / 3

Note that the previous point, f(1/2), is reused. Romberg's extrapolation, from 1 to 3 points:

M3 = m3 + (m3-m1)/(3²-1) = (3*f(1/6) + 2*f(1/2) + 3*f(5/6)) / 8

Numerically confirm 3:2:3 gives good results:

∫(1/x, x=1 .. e) ≈ (3/(1+(e-1)/6) + 2/(1+(e-1)/2) + 3/(1+(e-1)*5/6))/8 * (e-1) ≈ 0.9969 ≈ 1

Yes, Wes had suggested trisecting intervals for the mid-point rule.
https://www.hpmuseum.org/forum/thread-14...#pid127797
https://www.hpmuseum.org/forum/thread-15...#pid134284
03-25-2021, 10:13 PM
(This post was last modified: 03-25-2021 10:17 PM by robve.)
Post: #13
RE: Adaptive Simpson and Romberg integration methods
(03-25-2021 08:43 PM)Albert Chan Wrote: As you had mentioned, with bisecting intervals, you cannot reuse previous points.

It looks like (in the code?) you're aligning the midpoints to derive a closed formula to compare trapezoidal to midpoint and reuse points, right? I got that. It makes sense to do this to compare the errors of the trapezoidal versus midpoint results. I expect the Romberg open interval method to converge more slowly, not only in practice but also in theory, as we know from the error behaviors: at each midpoint step the error decreases to 1/9th its size (while evaluating three times as many points), whereas at each trapezoidal step the error decreases to 1/4th its size (while evaluating two times as many points).

What I was trying to say is that the extended midpoint points of the open interval are not aligned with the extended trapezoidal points of the closed interval. Just so everyone understands, to avoid confusion, we can illustrate this by running the code. The first 9 points for each method on a=0 to b=1 are:

- Closed (extended trapezoidal): x=0 x=1 x=0.5 x=0.25 x=0.75 x=0.125 x=0.375 x=0.625 x=0.875
- Open (extended midpoint): x=0.5 x=0.166667 x=0.833333 x=0.0555556 x=0.277778 x=0.388889 x=0.611111 x=0.722222 x=0.944444

The same number of points, not aligned except the first and only common one, x=0.5.

- Rob

"I count on old friends to remain rational"
03-25-2021, 11:57 PM
Post: #14
RE: Adaptive Simpson and Romberg integration methods
Hi, robve:

Note: In everything that follows, no offence is ever meant; it's just my trademark style when I feel skeptical (let's say) about some statement or other, in particular if not supported by facts. IMHO, of course.

(03-24-2021 08:54 PM)robve Wrote: Adaptive Simpson is one of my favorite integration methods. Adaptive Simpson is usually much more efficient than composite Simpson's rule and Romberg methods, since it uses fewer function evaluations in places where the integrand is well-approximated by a cubic function.

As you know, I don't share your appreciation for the methods you mention, at all. I respect that they're your favorites, but the original engineers at HP who developed the Romberg implementation for the HP-34C (later ported to many other models) weren't enthusiastic about it either; they had to make do with that inferior method because the much better ones they'd rather have included just didn't fit in the very meager available RAM. I quote from the August 1980 Hewlett-Packard Journal p.25-26 (my highlighting):
Quote: The HP-71B Math Pac uses the Romberg method. One advantage of Romberg over Adaptive Simpson is its error estimate [...] This error estimate can be reliably used, unless the integrand is much "too spiky", that is, has high order large nonzero derivatives, such in step functions and square and sawtooth wave functions that have non-differentiable points.

Well, I'd take that statement with a pinch of salt. I've seen said "reliable error estimate" fail more than once.

Quote: In practice, Adaptive Simpson is really, really good with only two minor drawbacks: [...] Adaptive Simpson uses closed intervals and may fail to integrate improper integrals.

So it only works "really, really good" for nicely-behaved functions, well approximated on points by a cubic and with no problems at the end of the integration interval. Excuse me if I'm not impressed, but many integrals appearing in real-life engineering and science aren't that nice, as said HP engineers were well aware of. I quote from the August 1980 Hewlett-Packard Journal p.29 and p.31-32 (my highlighting):
Quote: Adaptive Simpson will very likely not work, because it may subdivide its intervals until it hits the problematic points.

See? To be honest, I can't fathom why you would describe a quadrature method with that kind of limitations as being "really, really good". Your statements aren't supported by the facts.

Quote: Next I will compare Adaptive Simpson to two versions of Romberg, one for closed intervals (qromb) and one for open intervals (qromo). That is, computing the definite integral [...] for a variety of functions \( f(x) \) with different characteristics.

I find your "variety of functions" quite ... erm ... bland, for lack of a better word. Just run-of-the-mill, unassuming integrals for the most part. For your benefit, at the end of this message I'm including a few sample integrals (7 in all) belonging to my own test suite, which I carefully selected to test my quadrature implementations (written in HP-71B BASIC) vs. the HP-71B Math Pac's Romberg-based INTEGRAL keyword (written in assembler), complete with results, evaluation counts, relative speeds and some comments. You might want to try them out with your own implementation of your favorite methods, to see how they fare.

Quote: The Romberg qromb method tested performs at most N=23 subdivisions to evaluate the function up to 4 194 305 times. The Romberg qromo method tested performs at most N=15 subdivisions to evaluate the function up to 4 782 969 times. The Adaptive Simpson qasi method tested is recursive up to a depth of 21, which when evaluated in full performs 2^(21+1)+1 = 4 194 305 function evaluations.

Really!? You really let your implementations evaluate the function millions of times? The INTEGRAL keyword calls it quits upon reaching 65,535 evaluations and gives its best estimation, because obviously something's not working. My quadrature quits at 16,384 evaluations, and that occurs very, very rarely. Over four million evaluations? Is that limit user-selectable? And to add insult to injury, I see that many of your examples actually do reach those limits often!

Enough. Now, as announced above, a few examples from my own suite, I1 to I7, for you to try out with your implementations if you wish:

I1 = ∫(cos(x)*ln(x), x = 0..1) = -0.946083070367

>INTEGRAL(0,1,1E-8,COS(IVAR)*LN(IVAR))
-.9460830677 (~ 9 digits) using 16,383 evals and IBOUND = 9.46..E-9
My quadrature gives -.9460830703 (~ 10 digits) using 512 evals and is 14x faster.

I2 = ∫(1/sqrt(x), x = 0..1) = 2

>INTEGRAL(0,1,1E-10,1/SQR(IVAR))
1.999983901 (~ 6 digits) using 65,535 evals, IBOUND is negative = -1.99997..E-10
My quadrature gives exactly 2 (12 digits) using 192 evals and is 75x faster.

I3 = ∫(ln(Gamma(x)), x = 0..1) = .918938533205

>INTEGRAL(0,1,1E-9,LN(GAMMA(IVAR)))
.9189385330 (~ 10 digits) using 65,535 evals and IBOUND = 9.19..E-10 (positive despite the 65,535 evals)
My quadrature gives .9189385324 (~ 9 digits) using 512 evals and is 61x faster.

I4 = ∫(1/sqrt(-ln(x)), x = 0..1) = 1.77245385091

>INTEGRAL(0,1,1E-6,1/SQR(-LN(IVAR)))
1.7724378 (~ 6 digits) using 65,535 evals and IBOUND is negative: -1.77..E-6
My quadrature gives 1.7724542 (~ 8 digits) using 512 evals and is 15x faster.

I5 = ∫(ln(sin(x)), x = 0..Pi) = -2.1775860903

>INTEGRAL(0,PI,1E-9,LN(SIN(IVAR)))
-2.177586089 (~ 10 digits) using 65,535 evals and IBOUND is negative: -2.18..E-9
My quadrature gives -2.177586089 (~ 10 digits) using 512 evals and is 60x faster.

I6 = ∫(x^(-0.8), x = 0..1) = 5

>INTEGRAL(0,1,1E-8,IVAR^(-.8))
4.94996212105 (~ 2 (!) digits) using 65,535 evals and IBOUND is negative = -4.94E-8
My quadrature gives 5.0000000005 (~ 10 digits) using 192 evals and is 91x faster.

I7 = ∫(x^(-0.99), x = 0..1) = 100

>INTEGRAL(0,1,1E-7,IVAR^(-.99))
20.410858766 (0 (!!) digits) using 65,535 evals and IBOUND is negative = -1.99..E-6, and the same utterly wrong value is also returned for 1E-8..1E-12
My quadrature gives 100.000002 (~ 9 digits) using 256 evals and is 75x faster.

Regards.
V.

All My Articles & other Materials here: Valentin Albillo's HP Collection
03-26-2021, 02:05 AM
(This post was last modified: 03-26-2021 02:41 AM by robve.)
Post: #15
RE: Adaptive Simpson and Romberg integration methods
(03-25-2021 03:09 PM)Albert Chan Wrote: Trapezoids seems to be twice as good as mid-points (in the opposite direction).

As this is indeed the case, I wonder how trapezoidal versus midpoint actually compare in Romberg. To come up with a ratio of the number of points for Romberg+trapezoidal versus Romberg+midpoint, we have to consider that at each midpoint step the error decreases to 1/9th its size (while evaluating three times as many points), whereas at each trapezoidal step the error decreases to 1/4th its size (while evaluating two times as many points). However, 3*1/9 = 1/3 versus 2*1/4 = 1/2 does not align with observations. Combining this with your observation that midpoint needs twice the points gives 2*3*1/9 = 2/3 versus 1/2 per step. Interesting.

Edit: I ran an experiment with 100 randomly generated polynomials of degree 1 to 30. For eps=1e-9, the average number of points evaluated by method is:

Romberg trapezoid: 46.68
Romberg midpoint: 186.3
Adaptive Simpson: 431.76

For eps=1e-5 and the same 100 polynomials:

Romberg trapezoid: 19.32
Romberg midpoint: 64.8
Adaptive Simpson: 44

For eps=1e-3 and the same 100 polynomials:

Romberg trapezoid: 11
Romberg midpoint: 25.92
Adaptive Simpson: 14.92

The ratio is about 1:2.5 to 1:4 for Romberg closed versus open, depending on eps. As expected, Adaptive Simpson is fast for looser eps values, which makes it a favorite for quick estimates, especially with complicated functions (these polynomials are a bit too nice to make Adaptive Simpson interesting).

- Rob

"I count on old friends to remain rational"
03-26-2021, 07:12 AM
Post: #16
RE: Adaptive Simpson and Romberg integration methods
(03-25-2021 07:19 PM)robve Wrote: No, that approach has problems and is not in the right direction, though it could work but obviously not as well.

Just for clarification... Since HP's implementation of the Romberg method did not evaluate the endpoints, I incorrectly jumped to the conclusion that midpoints were being used rather than trapezoids. However, as was indicated, the midpoint method would require trisections to reuse previous values, but the implementation is clearly bisecting, which points back to trapezoids. What I was missing was the fact that the reason the endpoints don't need to be evaluated isn't the method but rather the substitution:

\( \int_{-1}^1 f(x)\,dx = \int_{-1}^1 f\left(\tfrac12(3u - u^3)\right)\,\tfrac32(1-u^2)\,du \)

At the endpoints, \( (1 - u^2) \) becomes zero, so there is no need to evaluate the function at the endpoints since the result will be multiplied by zero. (This assumes you have a proper integral.) For some improper integrals, such as \( \int_{-1}^1 \frac{dx}{\sqrt{1-x^2}} \), the transformed integrand doesn't approach zero at the endpoints but is still finite, so using midpoints instead of trapezoids would produce much better results. I guess the simplicity of bisection outweighed the advantages of using midpoints. When you look at the memory restrictions of those early calculators, it's understandable that they would go with the simplest method possible.

-wes
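In code, the substitution is just a wrapper around the integrand. A hypothetical C helper for an integral over [-1, 1] (the file-scope pointer f_ and the name g are illustrative):

Code:
/* With x = (3u - u^3)/2 the weight (3/2)(1 - u^2) vanishes at u = +/-1,
   so a closed-interval rule applied to g never evaluates f at the
   endpoints.  f_ points to the original integrand. */
static double (*f_)(double);

double g(double u) {
  return f_(0.5*u*(3 - u*u)) * 1.5*(1 - u*u);
}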
03-26-2021, 09:51 AM
Post: #17
RE: Adaptive Simpson and Romberg integration methods
(03-24-2021 08:54 PM)robve Wrote: In practice, Adaptive Simpson is really, really good with only two minor drawbacks:

I wonder if you considered an open interval version of Simpson's Rule. See Milne's Rule (Wikipedia's Open Newton–Cotes formulas). Or better yet, if you really want to stay with a 3-node method, how about the 3-node version of Gaussian Quadrature? It would be more accurate without taking any more computing effort.

I love teaching Simpson's Rule to students, but...

Quote: Where would any book on numerical analysis be without Mr. Simpson and his "rule"? The classical formulas for integrating a function whose value is known at equally spaced steps have a certain elegance about them, and they are redolent with historical association. Through them, the modern numerical analyst communes with the spirits of his or her predecessors back across the centuries, as far as the time of Newton, if not farther. Alas, times do change; with the exception of two of the most modest formulas, the classical formulas are almost entirely useless. They are museum pieces, but beautiful ones.

-wes
03-26-2021, 01:21 PM
Post: #18
RE: Adaptive Simpson and Romberg integration methods
(03-26-2021 09:51 AM)Wes Loewer Wrote: I love teaching Simpson's Rule to students, but...

Thanks for sharing the beautiful prose in Ch.4.1 on classical formulas. One important reason to consider Romberg and Adaptive Simpson these days, as well as all of the classical formulas, is the inherent elegance and beauty of these methods. Likewise, appreciating vintage pocket machines (HP, SHARP, etc.) with their limited hardware motivates trying to let them perform more powerful tasks than they were designed for. So the focus is on (elegant when possible) pedagogical code examples in BASIC, with C versions for reference.

I made some adjustments to the initial post to address reasonable questions and concerns where something was not clear. I misunderstood the HP-71B implementation, so that part is removed. The post is not specifically about the HP-71B, and other posts already discuss that topic.

- Rob

"I count on old friends to remain rational"
03-26-2021, 04:30 PM
Post: #19
RE: Adaptive Simpson and Romberg integration methods
(03-25-2021 11:57 PM)Valentin Albillo Wrote: Now, as announced above, a few examples from my own suite, I1 to I7, for you to try ...

All these integrals attack the assumption that the end-points do not matter. After one u-transformation, the mean of the integrand's end-point values (the first trapezoid t1) is still not zero. The first 5 can be fixed with *another* round of u-transformation.

∫(f(x), x=0..1) = ∫(6*u*(1-u)*f(u*u*(3-2*u)), u=0..1)

> 10 DEF FNU(U) @ N=N+1 @ FNU=6*U*(1-U)*FNF(U*U*(3-2*U)) @ ENDDEF
> 20 N=0 @ DISP INTEGRAL(0,1,P,FNU(IVAR)),N

> 1 DEF FNF(X)=COS(X)*LN(X) @ REM I1 = -Si(1)
> RADIANS @ P=1E-8 @ RUN
-.946083070248 511

> 1 DEF FNF(X)=1/SQRT(X) @ REM I2 = 1/0.5 = 2
> P=1E-10 @ RUN
2 127

> 1 DEF FNF(X)=LN(GAMMA(X)) @ REM I3 = ln(2*pi)/2
> P=1E-9 @ RUN
.918938533197 1023

> 1 DEF FNF(X)=1/SQRT(-LN(X)) @ REM I4 = sqrt(pi)
> P=1E-6 @ RUN
1.77245405564 31

> 1 DEF FNF(X)=LN(SIN(X)) @ REM I5 = re(1j/2*Li2(exp(2j)))-ln(2)
> P=1E-9 @ RUN
-1.05672020598 1023

The last 2 still fail even with the extra round of u-transformation ...

> 1 DEF FNF(X)=X^(-0.8) @ REM I6 = 1/0.2 = 5
> P=1E-8 @ RUN
4.99943927847 65535

> 1 DEF FNF(X)=X^(-0.99) @ REM I7 = 1/0.01 = 100
> P=1E-7 @ RUN
35.9675524685 65535
03-26-2021, 04:40 PM
Post: #20
RE: Adaptive Simpson and Romberg integration methods
(03-26-2021 04:30 PM)Albert Chan Wrote: All integrals is hitting the assumption that end-points does not matter.

Albert, darn, you just beat me, as I was testing transformed integrals too! But I ran into issues doing this in C with the endpoints, you know. I noticed that all of the integrals in his post have two distinguishing features: they have a U shape (or an inverted U) at both or either end, which makes them suitable for applying a transform before numerical integration, i.e. a change of variable or u-substitution. The trick to pull this off in an implementation is basically to have an initial check of a few points to find evidence of a U on either end and then transform. That way we can use a simple quadrature, e.g. Simpson's, Bode's, or more advanced Newton-Cotes formulas, to obtain a good approximation with fewer points and a usable error estimate.

- Rob

"I count on old friends to remain rational"
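For reference, Albert's [0, 1] substitution from the previous post, written as a hypothetical C wrapper (names are illustrative):

Code:
/* x = u*u*(3 - 2*u), dx = 6*u*(1-u) du: the weight kills both endpoints,
   so a closed-interval rule applied to g01 never evaluates f01_ at 0 or 1 */
static double (*f01_)(double);   /* original integrand on [0,1] */

double g01(double u) {
  return 6*u*(1 - u) * f01_(u*u*(3 - 2*u));
}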