The tanh-sinh quadrature
01-31-2017, 03:19 PM (This post was last modified: 01-21-2018 11:46 AM by emece67.)
Post: #1
The tanh-sinh quadrature
Hi all,

This post is aimed at the more "mathematical" among us. In it I roughly describe a numerical integration method I only recently became aware of, and which I think is very promising for computing integrals on programmable calculators.

Since my high-school days I have been fascinated by quadrature methods. To me it is a kind of magic that, given some integral:
\[I = \int_a^bf(x)dx\]
you can approximate it with:
\[I \approx \sum_iw_if(x_i)\]
How is it that you can condense all the "information" carried by the function \(f(x)\) over the interval \((a, b)\) into a discrete, finite sum? How is it that one set of weight and abscissa pairs \(\{w_i, x_i\}\) can give a much better approximation than another?
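
As a tiny illustration of that difference (a side sketch of my own, not needed for the rest of this post), here are two 2-point sets \(\{w_i, x_i\}\) applied to \(\int_0^1e^x dx = e-1\): the midpoint rule and the 2-point Gauss-Legendre rule, the latter being much more accurate with the same two samples:

Code:

# two 2-point quadrature rules applied to int_0^1 exp(x) dx = e - 1
from mpmath import mp, mpf, exp, sqrt

mp.dps = 16
exact = exp(1) - 1

# midpoint rule with two equal rectangles
mid = (exp(mpf('0.25')) + exp(mpf('0.75')))/2

# 2-point Gauss-Legendre rule mapped from (-1, 1) to (0, 1)
g = 1/sqrt(3)
gauss = (exp((1 - g)/2) + exp((1 + g)/2))/2

print(mid - exact)     # about -1.8e-2
print(gauss - exact)   # about -3.8e-4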

I still remember manually solving (with the help of a Casio fx-180p) the system of non-linear equations defining the abscissas of the Chebyshev formula, just to get those abscissas with more digits than were given in my textbook of the time (Piskunov).

Well, back to the present: a few days ago I became aware of the existence of another quadrature method, the so-called tanh-sinh quadrature, which seemed quite elegant to me. Some trials convinced me that the method is also precise, fast, robust and, surely, programmable on the little calculating machines we love. As this method (proposed in 1974) does not seem to be described in many books, only in some technical papers, I want to describe it here, hoping some of you find it as interesting as I did (provided you do not already know it!)

Let's go. Suppose that some integral needs to be computed:
\[I = \int_a^bf(x)dx\]
After the usual variable change \(x={b+a\over2}+{b-a\over2}r\) you get:
\[I = {b-a\over2}\int_{-1}^1f\left({b+a\over2}+{b-a\over2}r\right)dr\]
Now, the next variable change is performed: \(r=g(t)=\tanh\left({\pi\over2}\sinh t\right)\). Thus:
\[I={b-a\over2}\int_{-\infty}^\infty f\left({b+a\over2}+{b-a\over2}g(t)\right)g'(t)dt\]
Now, we compute this last integral as an infinite sum of rectangles, each of width \(h\):
\[I\approx{b-a\over2}h\sum_{i=-\infty}^\infty f\left({b+a\over2}+{b-a\over2}g(hi)\right)g'(hi)={b-a\over2}h\sum_{i=-\infty}^\infty f\left({b+a\over2}+{b-a\over2}r_i\right)w_i\]
with the abscissas \(r_i = \tanh\left({\pi\over2}\sinh(hi)\right)\) and the weights \(w_i = {\pi\over2}{\cosh(hi)\over\cosh^2\left({\pi\over2}\sinh(hi)\right)}\)

But, as \(i\) departs from 0, the weights \(w_i\) behave as:
\[w_i\rightarrow{\pi\over e^{{\pi\over2}e^{|hi|}-|hi|}}\]
which vanishes really fast because of the double exponential in the denominator. Thus, the infinite summation above can be approximated by the finite one:
\[I\approx{b-a\over2}h\sum_{i=-N}^N f\left({b+a\over2}+{b-a\over2}r_i\right)w_i\]
where \(N\) is (hopefully) a small integer.
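
Just to get a feeling for this decay (a small side check, not part of the program below), the abscissas and weights for \(h=1\) can be tabulated with mpmath; at \(i=4\) the weight has already dropped to about \(10^{-35}\):

Code:

# tabulate the abscissas r_i and weights w_i for h = 1
from mpmath import mp, mpf, pi, sinh, cosh, tanh

mp.dps = 20
h = mpf(1)
for i in range(5):
    u = pi/2*sinh(h*i)
    r = tanh(u)                       # abscissa r_i
    w = pi/2*cosh(h*i)/cosh(u)**2     # weight w_i
    print((i, r, w))                  # w_4 is already ~ 1e-35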

To implement this algorithm when working with \(d\) decimal digits, you start with \(h = 1\) (the rectangles in the \(t\) domain are of width 1). Then compute the required \(N\) as the largest integer such that:
  1. \(w_N\) is below \(\epsilon=10^{-d}\)
  2. and \(r_N\), expressed with \(d\) decimal digits, is strictly below 1
  3. and \({b + a\over2}+{b - a\over2}r_N\), expressed with \(d\) decimal digits, is strictly below \(b\)

These last two requirements ensure that the function \(f\) is never evaluated at the ends of the interval \((a, b)\). As \(hi\) increases, \(w_i\) rapidly vanishes to 0, but the abscissas \(r_i\) also rapidly approach the interval ends \((-1, 1)\). Not evaluating \(f\) at the ends allows this method to handle integrals whose integrand has discontinuities (or infinite derivatives) at the ends (well, only if the double-exponential decay of the weights "outweighs" the integrand).
Of the three previous conditions, I have found that the first is slightly less strict than the others, so, from the other two, \(N\) can be computed as:
\[N = \left\lfloor{1\over h}\sinh^{-1}\left({1\over \pi}\ln\left(2\cdot10^d\min\left(1, {b - a\over2}\right)\right)\right)\right\rfloor\]
With this value of \(N\) the summation is performed and you get a first approximation for \(I\).
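
As a worked example (my own arithmetic, just to fix ideas): with \(d=16\), \(h=1\) and \(b-a\ge2\), this gives \(N=\lfloor\sinh^{-1}(\ln(2\cdot10^{16})/\pi)\rfloor=\lfloor\sinh^{-1}(11.95)\rfloor=\lfloor3.18\rfloor=3\), so the first approximation uses \(2N+1=7\) function evaluations; the following levels then add 6, 12 and 26 more, which matches the 51 evaluations reported below for level 3.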

Now you repeat this process, but this time halving the rectangle width \(h\). You will halve this width several more times, so we define the "level" of the algorithm (or of the iteration) as \(k\) and let \(h=2^{-k}\) (the initial level is \(k=0\)). At each level you compute an approximation \(I_k\) to \(I\). When computing \(I_k\) you only need to compute half of the abscissas and weights, those with odd \(i\), as the even ones (for which \(t = i\,2^{-k} = (i/2)\,2^{-(k-1)}\)) were already included in the summation for \(I_{k-1}\). So \(I_k\) is \(I_{k-1}\) plus the sum over the new, odd, abscissas and weights. Also, the weights are even functions of \(i\), so \(w_{-i}=w_i\).

The algorithm stops when the relative difference between \(I_k\) and \(I_{k-1}\) falls below \(10^{-d/2}\) (in the code below the raw sums are compared instead, so the test reads \(\left|2S_{k-1}/S_k-1\right|<10^{-d/2}\), because halving \(h\) doubles the raw sum corresponding to the same value of the integral). The algorithm roughly doubles the number of correct digits at each level.

In my tests, when working with 16 decimal digits, the algorithm stops at level \(k=3\) having performed 51 function evaluations. At 32 digits it stops at level 4 with 123 function evaluations. The result is usually correct to 15 or 31 digits, but sometimes (in my tests this happened with integrands that go to infinity at one of the interval ends) it gives only half that number of correct digits.

There is an obvious drawback in this algorithm: the number of hyperbolic functions that must be computed to get the abscissas and weights. It amounts to twice the number of function evaluations (102 hyperbolics in my 16-digit tests). I have not yet compared the overall performance of this method against others (specifically, against the Romberg method).
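
One obvious way to cut this cost (a sketch on my side, not used in the code below) is to compute each sinh/cosh pair from a single exponential, so that every abscissa-weight pair costs two exponentials instead of four hyperbolic calls (whether this is actually faster depends on how the machine implements its hyperbolics):

Code:

# sketch: 2 exponentials per node instead of 4 hyperbolics
# (assumes t > 0, as in the code below where t = j*h with j >= 1)
from mpmath import mp, pi, exp

mp.dps = 16

def node(t):
    et = exp(t)
    ch = (et + 1/et)/2                # cosh(t)
    sh = (et - 1/et)/2                # sinh(t)
    u  = pi/2*sh
    eu = exp(u)
    r  = (eu - 1/eu)/(eu + 1/eu)      # tanh(u), the abscissa
    w  = ch*(2/(eu + 1/eu))**2        # cosh(t)/cosh(u)^2, the weight as used in the code
    return r, w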

I first heard of this algorithm while reading the documentation of the Python package mpmath, which I was using to perform comparisons between different root solvers at different digit counts.
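
For reference (this is just a call to the library, not the code described in this post), mpmath's built-in integrator already uses tanh-sinh by default, so it can serve as a cross-check:

Code:

# cross-check with mpmath's built-in tanh-sinh integrator
from mpmath import mp, mpf, quad, sqrt, ln

mp.dps = 32
v = quad(lambda x: sqrt(x)*ln(x), [0, 1])   # test case 4 below, exact value -4/9
print(v + mpf(4)/9)                         # residual error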

Below is some Python code you can reuse to play a bit with this method. As you can see, the final code is compact and straightforward, so I think it can be translated easily to many programmable calculators (the wp34s is my target).

Further reading:
http://crd-legacy.lbl.gov/~dhbailey/dhbp...h-sinh.pdf

http://www.davidhbailey.com/dhbpapers/quadrature-em.pdf

Borwein, Bailey & Girgensohn, “Experimentation in Mathematics - Computational Paths to Discovery”, A K Peters, 2003, pages 312-313.

H. Takahasi and M. Mori. "Double Exponential Formulas for Numerical Integration." Publications of RIMS, Kyoto University 9 (1974), 721–741. (Available online)

Regards.

Code:

from mpmath import *
import time

# number of digits
mp.dps = 16
# repeat this # of times (to get better time estimations)
nloops = 100

# test equations
# equation =  0 => x*log(1 + x); 0; 1; 1/4
#             1 => x**2*atan(x); 0; 1; (pi - 2 + 2ln2)/12
#             2 => exp(x)cos(x); 0; pi/2; (exp(pi/2) - 1)/2
#             3 => atan(sqrt(2 + x**2))/(1 + x**2)/sqrt(2 + x**2); 0; 1; 5pi**2/96
#             4 => sqrt(x)ln(x); 0; 1; -4/9
#             5 => sqrt(1 - x**2); 0; 1; pi/4
#             6 => sqrt(x)/sqrt(1 - x**2); 0; 1; 2sqrt(pi)Gamma(3/4)/Gamma(1/4)
#             7 => ln(x)**2; 0; 1; 2
#             8 => ln(cos(x)); 0; pi/2; -pi*ln(2)/2
#             9 => sqrt(tan(x)); 0; pi/2; pi*sqrt(2)/2

print

# try all equations
for equation in range(10):
  if equation == 0:
    # limits
    a = mpf('0')
    b = mpf('1')
    # function
    def f(x):
      return x*ln(1 + x)
    # expected result (not used during computation)
    s = mpf('1/4')
  if equation == 1:
    a = mpf('0')
    b = mpf('1')
    def f(x):
      return x**2*atan(x)
    s = (pi() - 2 + 2*ln(mpf('2')))/12
  if equation == 2:
    a = mpf('0')
    b = pi()/2
    def f(x):
      return cos(x)*exp(x)
    s = (exp(pi()/2) - 1)/2
  if equation == 3:
    a = mpf('0')
    b = mpf('1')
    def f(x):
      return atan(sqrt(2 + x**2))/(1 + x**2)/sqrt(2 + x**2)
    s = 5*pi()**2/96
  if equation == 4:
    a = mpf('0')
    b = mpf('1')
    def f(x):
      return sqrt(x)*ln(x)
    s = mpf('-4/9')
  if equation == 5:
    a = mpf('0')
    b = mpf('1')
    def f(x):
      return sqrt(1 - x**2)
    s = pi()/4
  if equation == 6:
    a = mpf('0')
    b = mpf('1')
    def f(x):
      return sqrt(x)/sqrt(1 - x**2)
    s = 2*sqrt(pi())*gamma(mpf('3/4'))/gamma(mpf('1/4'))
  if equation == 7:
    a = mpf('0')
    b = mpf('1')
    def f(x):
      return ln(x)**2
    s = mpf('2')
  if equation == 8:
    a = mpf('0')
    b = pi()/2
    def f(x):
      return ln(cos(x))
    s = -pi()*ln(mpf('2'))/2
  if equation == 9:
    a = mpf('0')
    b = pi()/2
    def f(x):
      return sqrt(tan(x))
    s = pi()*sqrt(mpf('2'))/2

  # to measure algorithm execution time
  tt0 = time.time()
  # repeat nloops times to get better time estimations
  for m in range(nloops):
    # counters
    tnfe = 0  # counts function evaluations
    hyp = 0   # counts hyperbolics

    # we need a < b
    a, b = (a, b) if b > a else (b, a)
    # x = bpa2 + bma2*r
    bpa2 = (b + a)/2
    bma2 = (b - a)/2

    # epsilon
    eps = mpf('10')**(-mp.dps)
    # convergence threshold
    thr = mpf('10')**(-mp.dps/2)

    pi2 = pi()/2

    # (approx) maximum t that yields
    # the maximum representable r & x
    # values strictly below the upper
    # limits +1 (for r) and b (for x)
    tm = asinh(ln(2*min(1, bma2)/eps)/(2*pi2))
    hyp += 2

    # level
    k = 0
    # maximum allowed level
    maxlevel = int(ceil(log(mp.dps, 2))) + 2
    # ss is the final result
    # 1st addend at order 0
    ss = f(bpa2)
    tnfe += 1
    # "initial" previous computed result, used
    # in the convergence test
    sshp = 2*ss + 1
    # progress thru levels
    while k <= maxlevel:
      h = mpf('2')**-k
      N = int(floor(tm/h))
      j = 1
      while j <= N:
        t = j*h
        csh = pi2*sinh(t)
        ch = cosh(t)
        w = ch / cosh(csh)**2
        r = tanh(csh)
        hyp += 4
        fp = f(bpa2 + bma2*r)
        fm = f(bpa2 - bma2*r)
        p = w*(fp + fm)
        tnfe += 2
        ss += p
        # at level 0 must sweep all j values,
        # at other levels only the odd ones
        j += 2 if k > 0 else 1
      # converged?
      if abs(2*sshp/ss - 1) < thr: break
      # no, advance to next level
      k += 1
      # store the just computed approximation
      # for the next convergence test
      sshp = ss
    # apply constant coefficients
    ss *= bma2*pi2*h
    # done, measure time
    tt1 = time.time()

  # print results
  print equation, tnfe, hyp, k, (tt1 - tt0)/nloops, ss, ss - s, ss/s - 1
01-31-2017, 10:03 PM
Post: #2
RE: The tanh-sinh quadrature
I have compared this tanh-sinh method against the Romberg method used in many hp calculators, by means of another Python script that implements the Romberg method.

To get similar errors, the Romberg method is faster when working with less than 12-14 decimal digits. At 10 digits, Romberg can be, say, 2 times faster.

But when working with a higher number of digits, the tanh-sinh method is way faster. At 16 digits it is slightly faster, at 32 it is 3x faster, and with even more digits it is faster by orders of magnitude. It needs far fewer function evaluations to get similar results.

As a side effect, the tanh-sinh method copes flawlessly with many integrals having infinite derivatives at the integration interval ends. In such cases the Romberg method converges very slowly and gives much greater errors. Even more, the tanh-sinh method also manages many cases of integrals having infinite discontinuities at the integration interval ends, where the Romberg method fails.

I think it is a really promising quadrature method for machines like the wp34s, that work with 16/32 digits.

Regards.
02-01-2017, 04:13 PM
Post: #3
RE: The tanh-sinh quadrature
Looks like algorithms are "happening" in Madrid ... I need to put it on my list of cities to visit, maybe in 2018?????

I will give your post a major reading ... thank you ... thank you ... thank you!

Namir
02-05-2017, 02:23 AM (This post was last modified: 02-05-2017 02:24 AM by Namir.)
Post: #4
RE: The tanh-sinh quadrature
Found Excel VBA code for the tanh-sinh quadrature written by an Australian programmer, Graeme Dennes, who collaborated with me to enhance Romberg's method (the files are on my web site; click here). I will look at the VBA code, which seems to be long.

Namir
02-05-2017, 09:25 AM (This post was last modified: 02-05-2017 09:35 AM by emece67.)
Post: #5
RE: The tanh-sinh quadrature
Thanks Namir. I was aware of that spreadsheet (I saw a reference to it in a blog, but the link was broken). I was finally able to download it today.

According to that spreadsheet, this tanh-sinh method is by far the fastest of those compared (Romberg and Gauss-Kronrod among them).

The author states that: "The speed and the accuracy of the Tanh-Sinh method in particular and the Double-Exponential (DE) method in general are simply astounding!".

Regards.

Edit: this spreadsheet lists more than 400 definite integrals with their values, a really good reference indeed!
02-05-2017, 01:35 PM
Post: #6
RE: The tanh-sinh quadrature
Cesar,

I am "tickled pink" to read your comments about the VBA code for tanh-sinh quadrature. Will be studying it this week.

Namir
02-05-2017, 05:25 PM (This post was last modified: 02-06-2017 08:27 AM by Pekis.)
Post: #7
RE: The tanh-sinh quadrature
(02-05-2017 01:35 PM)Namir Wrote:  Cesar,

I am "tickled pink" to read your comments about the VBA code for tanh-sinh quadrature. Will be studying it this week.

Namir

Hello, how does it also compare to the Simpson Revival algorithm (described in a recent thread on this site)?

Thanks
02-07-2017, 09:26 PM
Post: #8
RE: The tanh-sinh quadrature
Cesar,

What does the following statement in Python do?

j += 2 if k > 0 else 1

Namir
02-07-2017, 10:59 PM (This post was last modified: 02-07-2017 11:09 PM by emece67.)
Post: #9
RE: The tanh-sinh quadrature
(02-07-2017 09:26 PM)Namir Wrote:  Cesar,

What does the following statement in Python do?

j += 2 if k > 0 else 1

Namir

if (k > 0) then
j = j + 2;
else
j = j + 1;

On the first level, one must cover all j values (j = j + 1). On the next levels only the odd ones (j = j + 2).

Regards.
02-07-2017, 11:18 PM
Post: #10
RE: The tanh-sinh quadrature
Thank you Cesar. Just for fun, I am trying to convert your Python code to Excel VBA.

Namir
02-07-2017, 11:48 PM
Post: #11
RE: The tanh-sinh quadrature
The translation of your Python code to Excel VBA works .. and is very accurate!!

:-)

Namir
02-08-2017, 02:19 PM (This post was last modified: 02-08-2017 10:23 PM by Namir.)
Post: #12
RE: The tanh-sinh quadrature
The following Excel VBA code is based on emece67's Python code.

The VBA code writes the tags for column A. The code uses columns B and up to solve one or more numerical integration problems. The input rows in column B and up contain the following data:

1. Row 1 contains the value of A, the lower integration limit.
2. Row 2 contains the value of B, the upper integration limit.
3. Row 3 contains text that describes the function to be integrated. For example, the input can be "x*ln(1 + x)". The input is case-insensitive.
4. Row 4 contains the value of the exact integral. You can enter either a constant or a calculated value, such as "=LN(10)" or "=LN(B2/B1)".

Note: Make sure that math functions appearing in row 3 are properly interpreted by VBA's runtime system. You may need to change a function's name from, say, LN to LOG to avoid a runtime error.

The output rows contain the following data:

1. Row 5 contains the calculated integral.
2. Row 6 contains the error in the integral.
3. Row 7 contains the percent error in the integral.
4. Row 8 contains the number of function calls.
5. Row 9 contains the "hyperbolics count", which is twice the number of function calls.

Here is the VBA code.

Code:
Option Explicit

' This program is based on the Python code shared by "Cesar", who goes by
' the user name emece67 on the hpmuseum.org web site.

Function Fx(ByVal sFx As String, ByVal X As Double) As Double
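  ' Temporarily hide "EXP(" so that the X inside "EXP" is not hit by the
  ' variable substitution, replace the variable X by its numeric value,
  ' then restore "EXP(" and let Excel evaluate the resulting expression string.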
  sFx = Replace(sFx, "EXP(", "!")
  sFx = Replace(sFx, "X", "(" & X & ")")
  sFx = Replace(sFx, "!", "EXP(")
  Fx = Evaluate(sFx)
End Function

Function Min(X, Y)
  Min = WorksheetFunction.Min(X, Y)
End Function

Function Floor(ByVal X As Double) As Double
  Floor = WorksheetFunction.Floor_Math(X)
End Function

Function Ceil(ByVal X As Double) As Double
  Ceil = WorksheetFunction.Ceiling_Math(X)
End Function

Function Sinh(ByVal X As Double) As Double
  Sinh = WorksheetFunction.Sinh(X)
End Function

Function Cosh(ByVal X As Double) As Double
  Cosh = WorksheetFunction.Cosh(X)
End Function

Function ASinh(ByVal X As Double) As Double
  ASinh = WorksheetFunction.ASinh(X)
End Function

Function Tanh(ByVal X As Double) As Double
  Tanh = WorksheetFunction.Tanh(X)
End Function

Sub WriteToColumnA()
' set up tags of columns A
  [A1].Value = "A"
  [A2].Value = "B"
  [A3].Value = "F(x)"
  [A4].Value = "Exact Result"
  [A5].Value = "Result"
  [A6].Value = "Err"
  [A7].Value = "%Err"
  [A8].Value = "Fx Calls"
  [A9].value = "Hyperbolics Count"
End Sub

Function TanhSinhQuadrature(ByVal sFx As String, ByVal A As Double, ByVal B As Double, _
              ByRef numFxCalls As Long, ByRef hyperbolicsCount As Long) As Double
  Dim K As Long, J As Long, Col As Integer
  Dim BuffAB As Double
  Dim ResultExact As Double, BmA2 As Double, BpA2 As Double, eps As Double
  Dim convergenceThreshold As Double, halfPi As Double, tMax As Double, h As Double
  Dim dps As Byte, maxLevel As Long, N As Long
  Dim ch As Double, csh As Double, w As Double, r As Double, fp As Double, fm As Double
  Dim p As Double, resultFinal As Double, resultPrevious As Double, t As Double
 
 
  If A > B Then
    BuffAB = A
    A = B
    B = BuffAB
  End If
  
  numFxCalls = 0
  hyperbolicsCount = 0
  dps = 16
  BpA2 = (B + A) / 2
  BmA2 = (B - A) / 2
  eps = 1 / 10 ^ dps
  ' convergence threshold
  convergenceThreshold = 1 / 10 ^ (dps / 2)
  halfPi = 2 * Atn(1)
  ' (approx) maximum t that yields
  ' the maximum representable r & x
  ' values strictly below the upper
  ' limits +1 (for r) and b (for x)
  tMax = ASinh(Log(2 * Min(1, BmA2) / eps) / (2 * halfPi))
  hyperbolicsCount = hyperbolicsCount + 2
  ' level
  K = 0
  ' maximum allowed level
  maxLevel = Int(Ceil(Log(dps) / Log(2))) + 2
  ' resultFinal is the final result
  ' 1st addend at order 0
  resultFinal = Fx(sFx, BpA2)
  numFxCalls = numFxCalls + 1
  ' "initial" previous computed result, used
  ' in the convergence test
  resultPrevious = 2 * resultFinal + 1
  ' progress thru levels
  Do While K <= maxLevel
    DoEvents
    h = 1 / 2 ^ K
    N = Int(Floor(tMax / h))
    J = 1
    Do While J <= N
      DoEvents
      t = J * h
      csh = halfPi * Sinh(t)
      ch = Cosh(t)
      w = ch / Cosh(csh) ^ 2
      r = Tanh(csh)
      hyperbolicsCount = hyperbolicsCount + 4
      fp = Fx(sFx, BpA2 + BmA2 * r)
      fm = Fx(sFx, BpA2 - BmA2 * r)
      p = w * (fp + fm)
      numFxCalls = numFxCalls + 2
      resultFinal = resultFinal + p
      ' at level 0 must sweep all j values,
      ' at other levels only the odd ones
      If K > 0 Then J = J + 2 Else J = J + 1 ' j += 2 if k > 0 else 1
    Loop
    ' converged?
    If Abs(2 * resultPrevious / resultFinal - 1) < convergenceThreshold Then Exit Do
    ' no, advance to next level
    K = K + 1
    ' store the just computed approximation
    ' for the next convergence test
    resultPrevious = resultFinal
  Loop
  ' apply constant coefficients
  resultFinal = BmA2 * halfPi * h * resultFinal
  TanhSinhQuadrature = resultFinal
End Function

Sub goColumns()
  Dim Col As Integer
  Dim A As Double, B As Double, sFx As String
  Dim ResultExact As Double
  Dim numFxCalls As Long, hyperbolicsCount As Long
  Dim resultFinal As Double
  
  ' you can comment out the next statement after you run the code once
  Call WriteToColumnA
  
  On Error GoTo HandleErr
  Col = 2
  Do Until IsEmpty(Cells(2, Col))
    A = Cells(1, Col)
    B = Cells(2, Col)
    sFx = UCase(Cells(3, Col))
    sFx = Replace(sFx, " ", "")
    ResultExact = Cells(4, Col)
    
    resultFinal = TanhSinhQuadrature(sFx, A, B, numFxCalls, hyperbolicsCount)

    Cells(5, Col) = resultFinal
    Cells(6, Col) = ResultExact - resultFinal
    Cells(7, Col) = 100 * Cells(6, Col) / ResultExact
    Cells(8, Col) = numFxCalls
    Cells(9, Col) = hyperbolicsCount
GoHere:
    Col = Col + 1
  Loop
    
ExitProc:
  Exit Sub
  
HandleErr:
  ' uncomment the next statement and set it as a breakpoint when you want to debug
  ' Resume Next
  Resume GoHere
    
End Sub

The above code should be a good step toward generating HP Prime code for the tanh-sinh quadrature.
02-08-2017, 02:59 PM
Post: #13
RE: The tanh-sinh quadrature
(02-07-2017 11:48 PM)Namir Wrote:  The translation of your Python code to Excel VBA works .. and is very accurate!!

Thanks Namir. Will you add this VBA code to Graeme Dennes's Excel workbook, to compare it against the different Romberg methods?

Unfortunately, I'm still an Excel 101 student, and such things as VBA scripts inside Excel workbooks are alien to me.

As an aside, with the Python mpmath package (an arbitrary-precision package) you specify the number of decimal digits you want to use by means of the dps value, so you can adjust it to any value you want. I assume that VBA Doubles are always 64-bit floats, so dps should not be changed in your code.

Thanks again & regards.
02-08-2017, 03:20 PM
Post: #14
RE: The tanh-sinh quadrature
(02-08-2017 02:59 PM)emece67 Wrote:  
(02-07-2017 11:48 PM)Namir Wrote:  The translation of your Python code to Excel VBA works .. and is very accurate!!

Thanks Namir. Will you add such VBA code to the Excel from Graeme Dennes to compare it against different Romberg methods?

Unfortunately, I'm still an Excel 101 student, and such things as VBA scripts inside Excel books are alien to me.

As a side, with the Python mpmath package (an arbitrary precision package) you specify the number of decimal digits you want to use by means of the dps value, so you can adjust it to any value you want. I assume that VBA Doubles are always 64 bit floats, so dps must not be changed in your code.

Thanks again & regards.

Excel VBA has 64-bit floats and does not support extended precision like Python (and Matlab?)

Namir
02-08-2017, 10:25 PM
Post: #15
RE: The tanh-sinh quadrature
(02-08-2017 02:59 PM)emece67 Wrote:  
(02-07-2017 11:48 PM)Namir Wrote:  The translation of your Python code to Excel VBA works .. and is very accurate!!

Thanks Namir. Will you add such VBA code to the Excel from Graeme Dennes to compare it against different Romberg methods?

Unfortunately, I'm still an Excel 101 student, and such things as VBA scripts inside Excel books are alien to me.

As a side, with the Python mpmath package (an arbitrary precision package) you specify the number of decimal digits you want to use by means of the dps value, so you can adjust it to any value you want. I assume that VBA Doubles are always 64 bit floats, so dps must not be changed in your code.

Thanks again & regards.

I can tell you that the Excel VBA code by Graeme Dennes is better than mine, since it is most likely better optimized and includes computational tricks. I did a straightforward translation of your Python code.

Namir
03-08-2017, 01:52 AM
Post: #16
RE: The tanh-sinh quadrature
Hi again.

I finally found the time to write an integration program for the wp34s that uses this tanh-sinh algorithm. I checked its behaviour against that of the built-in integrator (a Romberg-based one). It is even better than I expected. When working in SCI2/ENG2 both algorithms are equally fast, but when working with more significant digits the tanh-sinh one is much faster, sometimes 10x or more.

As a bonus, it behaves much better in those cases where the integrand goes to infinity at the interval ends (while still being integrable, of course) or where its derivative goes to infinity at the ends.

On the other hand, given the way this algorithm works, using far fewer sample points and concentrating them near the interval edges, some functions are difficult for it, namely:
  • oscillatory functions (say, integrating cos(x) over 4.25 periods)
  • functions that go to zero very fast when departing from the middle of the interval (think of a narrow bell-shaped function)
Usually the Romberg method gets better results for these kinds of functions, though it takes longer to complete the computation.
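
If someone wants to play with the oscillatory case on a PC first (this is only an mpmath experiment, not the wp34s program), the built-in tanh-sinh integrator can report its error estimate together with the result:

Code:

# the oscillatory example: cos(x) over 4.25 periods, exact value sin(8.5*pi) = 1
from mpmath import mp, mpf, pi, cos, quadts

mp.dps = 16
v, err = quadts(cos, [0, mpf('8.5')*pi], error=True)
print(v)    # integral estimate
print(err)  # error estimate reported by mpmath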

In any case, it seems to me that this method is, no doubt, overall better than the usual Romberg one, so it will soon replace the built-in integration program on my wp34s machines.

In a few days I'll post the program listing (it is still in a primitive shape, not fit to be seen by educated people), if someone is interested in it.

Regards.
03-08-2017, 02:42 AM
Post: #17
RE: The tanh-sinh quadrature
This method looks interesting but I'm not convinced it should be part of the standard 34S firmware.

I originally used a Gauss-Kronrod quadrature in the 34S. As with all quadrature methods, it didn't always work and it wasn't adaptive, but it was pretty fast. Enough people pointed out specific failure cases and wanted Romberg that I changed the method over. Rapidly oscillating functions and some exponentials were bad, from memory.

Still, publish the code please,

- Pauli
03-08-2017, 09:19 AM
Post: #18
RE: The tanh-sinh quadrature
(03-08-2017 02:42 AM)Paul Dale Wrote:  This method looks interesting but I'm not convinced it should be part of the standard 34S firmware.

I originally used a Gauss-Kronrod quadrature in the 34S. As with all quadrature methods, it didn't always work and it wasn't adaptive but it was pretty fast. Enough people pointed out specific failure cases and wanted Romberg that I changed the method over. Rapidly oscillating functions and some exponential were bad from memory.

Still, publish the code please,

- Pauli

In fact, this is an adaptive quadrature. The number of abscissas is doubled at each iteration and the method stops when the computed integral changes by less than the required precision. The difference from the previous Gauss-Kronrod method on the wp34s is, I think, that here the abscissas and weights are computed on the fly, so you are not constrained to a given number of precomputed samples.

In any case, my implementation of this method for the wp34s stops the algorithm when the required precision has been reached or after a given number of function evaluations has been performed. Relaxing these criteria would give better results for such problematic functions. I'll test it.

Regards.
03-08-2017, 06:18 PM
Post: #19
RE: The tanh-sinh quadrature
I've just posted the code in a separate thread.

Regards.
03-08-2017, 11:56 PM
Post: #20
RE: The tanh-sinh quadrature
.
Hi, Paul:

(03-08-2017 02:42 AM)Paul Dale Wrote:  I originally used a Gauss-Kronrod quadrature in the 34S. As with all quadrature methods, it didn't always work and it wasn't adaptive but it was pretty fast. Enough people pointed out specific failure cases and wanted Romberg that I changed the method over. Rapidly oscillating functions and some exponential were bad from memory.

I have my own quadrature implementation and would be very obliged if you could post (or include working links to) the specific failure cases you mention, as well as any other cases you tested your quadrature against.

In particular I'd like to know:

- the actual integral: function, limits of integration, accuracy desired
- the result you got: actual result vs. exact result
- the timing: actual min.secs (as compared to the time it takes your machine to invert a 10x10 random matrix of (0..1) 12-digit reals) and/or total function evaluations:

Example (a far too easy one):

Integral(0,1,sin(x^2) dx, 1E-12) -> 0.310268246746, exact=0.310268(whatever)
Timing: 1.2 seconds. (on a machine that takes 0.57 seconds to invert said matrix), 257 function evaluations in all.

Thanks and best regards.
V.
.

  
All My Articles & other Materials here:  Valentin Albillo's HP Collection
 