# HP Forums

If you are familiar with the basic Simpson's rule, you know that it integrates f(x) in an interval (a, b). This algorithm calculates f(x) at a, b, and (a+b)/2. In other words, Simpson's rule samples f(x) at the midpoint of (a, b). Typically, we use the composite (chained) version of Simpson's rule, where we use multiple equidistant values of x to calculate the integral.

I decided to develop a similar algorithm that calculates the integral of f(x) for the interval (a, b). Instead of calculating f((a + b)/2), I decided to determine the value x=m where f(m) = (f(a) + f(b)) / 2. In other words, I seek the value of x where f(x) is the average of f(a) and f(b). The first step is to perform a simple inverse Lagrangian interpolation to calculate m. Then perform an analytical integration of the Lagrangian polynomial that passes through (a, f(a)), (m, f(m)), and (b, f(b)).

Here is the pseudo-code for the algorithm:

Quote:Given a, b, and function f(x).
To calculate the Area of f(x) between a and b.

c = (a + b) / 2

let f(m) = (f(a) + f(b)) / 2

m = a * [(f(m) - f(c))(f(m) - f(b))]/[(f(a) - f(c))(f(a) - f(b))] +
c * [(f(m) - f(a))(f(m) - f(b))]/[(f(c) - f(a))(f(c) - f(b))] +
b * [(f(m) - f(a))(f(m) - f(b))]/[(f(b) - f(a))(f(b) - f(c))]

P(x) = f(a)*(x-m)*(x-b)/[(a-m)*(a-b)] +
f(m)*(x-a)*(x-b)/[(m-a)*(m-b)] +
f(b)*(x-a)*(x-m)/[(b-a)*(b-m)]

= Ca*(x^2-(m+b)x + m*b) +
Cm*(x^2-(a+b)x + a*b) +
Cb*(x^2-(m+a)x + m*a)

= (Ca + Cm + Cb)*x^2 -
(Ca*(m+b) + Cm*(a+b) + Cb*(m+a))*x +
(Ca*m*b + Cm*a*b + Cb*m*a)

= K1*x^2 - K2*x + K3

Area = K1*x^3/3 - K2*x^2/2 + K3*x ] from a to b
= K1*(b^3-a^3)/3 - K2*(b^2-a^2)/2 + K3*(b-a)
=[b-a]*[K1/3*(b^2+ ab + a^2) -
K2/2 * (b + a) + K3]

Where,

K1 = (Ca + Cm + Cb)
K2 = (Ca*(m+b) + Cm*(a+b) + Cb*(m+a))
K3 = (Ca*m*b + Cm*a*b + Cb*m*a)

Ca = f(a)/[(a-m)(a-b)]
Cm = f(m)/[(m-a)(m-b)]
Cb = f(b)/[(b-a)(b-m)]
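
For readers who want to try the pseudo-code outside VBA, here is a Python transcription (my own sketch, not part of the original post; the function names are mine). It assumes f(a) ≠ f(b) and f(a) ≠ f(c) on each segment, so the inverse interpolation denominators are nonzero:

```python
def area_segment(f, a, b):
    """One segment: find m with f(m) ~ (f(a)+f(b))/2 by inverse Lagrange
    interpolation through three samples, then integrate the quadratic
    through (a, f(a)), (m, f(m)), (b, f(b)) analytically.
    Assumes f(a) != f(b) and f(a) != f(c) on the segment."""
    c = (a + b) / 2
    fa, fb, fc = f(a), f(b), f(c)
    fm = (fa + fb) / 2          # target ordinate: average of the endpoints
    # inverse Lagrange interpolation: treat x as a polynomial in f
    m = (a * (fm - fc) * (fm - fb) / ((fa - fc) * (fa - fb))
         + c * (fm - fa) * (fm - fb) / ((fc - fa) * (fc - fb))
         + b * (fm - fa) * (fm - fc) / ((fb - fa) * (fb - fc)))
    fm = f(m)
    # Lagrange coefficients of the quadratic through the three nodes
    ca = fa / ((a - m) * (a - b))
    cm = fm / ((m - a) * (m - b))
    cb = fb / ((b - a) * (b - m))
    k1 = ca + cm + cb
    k2 = ca * (m + b) + cm * (a + b) + cb * (m + a)
    k3 = ca * m * b + cm * a * b + cb * m * a
    return (b - a) * (k1 / 3 * (b * b + a * b + a * a) - k2 / 2 * (b + a) + k3)

def area(f, a, b, n):
    """Composite version over n equal subintervals (counted loop)."""
    h = (b - a) / n
    return sum(area_segment(f, a + i * h, a + (i + 1) * h) for i in range(n))
```

For f(x) = 1/x on (1, 2) with n = 10 this reproduces the kind of accuracy reported later in the thread.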

Here is an implementation of the function Area in Excel VBA:

Quote:Function f(ByVal X As Double) As Double
' Sample function. Edit the code to customize for other mathematical functions
f = 1 / X
End Function

Function Area(ByVal A As Double, ByVal B As Double, ByVal N As Integer) As Double
Dim Sum As Double
Dim Blast As Double, Fa As Double, Fb As Double, Fc As Double, Fm As Double
Dim Ca As Double, Cb As Double, Cm As Double
Dim K1 As Double, K2 As Double, K3 As Double
Dim Incr As Double, m As Double, C As Double

Blast = B
Incr = (B - A) / N
Sum = 0
Do
B = A + Incr
C = (A + B) / 2
Fa = f(A)
Fb = f(B)
Fc = f(C)
Fm = (Fa + Fb) / 2

m = A * ((Fm - Fc) * (Fm - Fb)) / ((Fa - Fc) * (Fa - Fb)) + _
C * ((Fm - Fa) * (Fm - Fb)) / ((Fc - Fa) * (Fc - Fb)) + _
B * ((Fm - Fa) * (Fm - Fc)) / ((Fb - Fa) * (Fb - Fc))

Fm = f(m)

Ca = Fa / (A - m) / (A - B)
Cm = Fm / (m - A) / (m - B)
Cb = Fb / (B - A) / (B - m)

K1 = (Ca + Cm + Cb)
K2 = (Ca * (m + B) + Cm * (A + B) + Cb * (m + A))
K3 = (Ca * m * B + Cm * A * B + Cb * m * A)

Sum = Sum + Incr * (K1 / 3 * (B ^ 2 + A * B + A ^ 2) - K2 / 2 * (B + A) + K3)

A = B
Loop Until A >= Blast
Area = Sum

End Function

The parameters for function Area are:

1) A and B that define the integration interval.
2) N is the number of divisions within the integration interval. Can be any positive value.

I tested the above code with f(x)=x and f(x)=1/x and obtained good results.

My rationale behind this algorithm is the integration of functions that experience steep changes. That is why I seek the value x=m instead of x=(a+b)/2.

Enjoy!

Namir
(05-30-2018 01:39 PM)Namir Wrote: [ -> ]I tested the above code with f(x)=x and f(x)=1/x and obtained good results.

Namir, I have tried your code in VBA for Excel, and it seems to work fine. But for f(x)=1/x (your example) something happens that I do not understand.

Let's assume a=1 and b=2 so that the true result is ln 2 = 0,693147180559945...

n = 10 => 0,693146401483427
n = 20 => 0,693147131766058
n = 30 => 0,693147170917894
n = 40 => 0,705569697497420 (!!)
n = 50 => 0,693147179310086
n = 60 => 0,701445982770560 (!!)
n = 70 => 0,700264648002835 (!!)
n = 80 => 0,699377730119543 (!!)
n = 90 => 0,698687360816317 (!!)
n=100 => 0,693147180481822

So the results first get more and more accurate as n increases, and the error is about 1/4 of that of the standard Simpson method.
But then some results are way off. For instance for n=10 and n=16 the results are fine. For n=12 and n=14 they are off.
Likewise n=20 and n=24 are fine, n=22 is not.

What's going on here?
Is there a special restriction for the value of n? You said it can be any positive value.

EDIT: I think I found the problem. It's the exit condition of the loop. Due to roundoff errors the >= test does not test true if "a" is sliiiiightly less than "Blast". Testing floating point values for equality always is a bad idea. Instead you should check if the difference is below a certain threshold. But this is not required here: After replacing the DO-loop with a FOR-loop everything works fine. I think you should adjust your program accordingly:

Replace the DO line with FOR I=1 TO N
Replace the LOOP UNTIL... line with NEXT I

After this adjustment the program seems to work fine.
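
The roundoff trap described above is easy to reproduce in any binary floating-point language. This Python snippet (my illustration, not from the thread) shows why an accumulated sum can land just short of the endpoint:

```python
# 0.1 has no exact binary representation, so ten additions of 0.1
# accumulate to just below 1.0 -- a `Loop Until x >= 1.0` that expects
# to stop here would run one extra iteration.
x = 0.0
for _ in range(10):
    x += 0.1
print(x)         # 0.9999999999999999
print(x >= 1.0)  # False
```

A counted FOR loop sidesteps the comparison entirely, which is why it fixes the n=40, 60, 70, ... anomalies.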
For the example function the accuracy is comparable to a standard Simpson method with √2 n intervals.

Dieter
Thanks Dieter I made the changes that you suggested.

My own (very limited) testing shows that the new algorithm generally yields more accurate results than Simpson's Rule. Of course this advantage comes at a computational price (I used old compact Simpson code that we had discussed a few years back on this forum). So increased accuracy comes at the cost of more CPU work. Fair enough!

Here is the updated VBA code:

Quote:Function f(ByVal X As Double) As Double
f = Exp(X) - 3 * X ^ 2
End Function

Function Area(ByVal A As Double, ByVal B As Double, ByVal N As Integer) As Double
Dim Sum As Double
Dim Blast As Double, Fa As Double, Fb As Double, Fc As Double, Fm As Double
Dim Ca As Double, Cb As Double, Cm As Double
Dim K1 As Double, K2 As Double, K3 As Double
Dim Incr As Double, m As Double, C As Double
Dim I As Integer

Blast = B
Incr = (B - A) / N
Sum = 0
For I = 1 To N
B = A + Incr
C = (A + B) / 2
Fa = f(A)
Fb = f(B)
Fc = f(C)
Fm = (Fa + Fb) / 2

m = A * ((Fm - Fc) * (Fm - Fb)) / ((Fa - Fc) * (Fa - Fb)) + _
C * ((Fm - Fa) * (Fm - Fb)) / ((Fc - Fa) * (Fc - Fb)) + _
B * ((Fm - Fa) * (Fm - Fc)) / ((Fb - Fa) * (Fb - Fc))

Fm = f(m)

Ca = Fa / (A - m) / (A - B)
Cm = Fm / (m - A) / (m - B)
Cb = Fb / (B - A) / (B - m)

K1 = (Ca + Cm + Cb)
K2 = (Ca * (m + B) + Cm * (A + B) + Cb * (m + A))
K3 = (Ca * m * B + Cm * A * B + Cb * m * A)

Sum = Sum + Incr * (K1 / 3 * (B ^ 2 + A * B + A ^ 2) - K2 / 2 * (B + A) + K3)

A = B
Next I
Area = Sum

End Function

Namir
(05-30-2018 08:17 PM)Namir Wrote: [ -> ]My own (very limited) testing shows that the new algorithm generally yields more accurate results than Simpson's Rule.

I have only tested two functions (1/x and sin(x)), and here the results with your method are actually more accurate than those with a standard Simpson method. In my test cases the latter required about 40% (1/x) or 15% (sin(x)) more intervals to achieve the same accuracy level. On the other hand your method requires four function calls per loop where the classic method needs only one.

But the Simpson method can be improved. Some time ago I proposed a variation that calculates the Simpson estimates for n and n/2 intervals, which are then combined into a final result that is more precise than the better of the two.

Example: the integral of 1/x from 1 to 2.

Your method with n=40: error ~–3,1 E–9
Simpson method with n=40: error ~ 1,2 E–8
Simpson method with n=56: error ~ 3,2 E–9

Simpson with n=40: error ~ 1,2 E–8
Simpson with n=20: error ~ 1,9 E–7
Combined result: error ~ 6 E–11

So you can see that your method performs better with the same number of intervals. What it does with n=40 requires n=56 with the standard Simpson method. However, your method needs ~160 function calls as opposed to 56 with the standard method.
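
For comparison, here is a Python sketch of the combined-Simpson idea (my own illustration; I am assuming the combination Dieter refers to is the standard Richardson step that weights the n- and n/2-interval estimates as (16, -1)/15, which cancels Simpson's leading O(h^4) error term):

```python
import math

def simpson(f, a, b, n):
    """Standard composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return s * h / 3

def simpson_combined(f, a, b, n):
    """Richardson-style combination of the n and n/2 estimates."""
    return (16 * simpson(f, a, b, n) - simpson(f, a, b, n // 2)) / 15

# integral of 1/x from 1 to 2 is ln 2
err_plain = abs(simpson(lambda x: 1 / x, 1.0, 2.0, 40) - math.log(2))
err_comb = abs(simpson_combined(lambda x: 1 / x, 1.0, 2.0, 40) - math.log(2))
```

With n = 40 this reproduces the orders of magnitude quoted above: roughly 1E-8 for plain Simpson and around 1E-10 for the combined estimate.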

Finally, with merely 40 loops (and function calls) the combined method achieves an accuracy level that requires more than 100 loops (and 400 function calls) with the new method. But that's what you already noted:

(05-30-2018 08:17 PM)Namir Wrote: [ -> ]Of course this advantage comes at a computational price (I used old compact Simpson code that we had discussed a few years back on this forum). So increased accuracy comes at the cost of more CPU work. Fare enough!

Dieter
Two questions come to mind. First, what is the error expansion (perhaps by way of a Taylor series), and second, how much does the method cost? Could the extra cost be used to move to a higher-order (or extrapolated, like Richardson's method) integration method?

Generally, three point rules like Simpson's are just as good (in order of magnitude) as the one with one more point (Bode's rule in this group.) Thus, another functional evaluation may not be that valuable if used in a simple integration formula.

Another problem is roundoff. With low-degree Newton-Cotes formulas no problems arise, but after 8 or so points the coefficients change sign and get really big.
(05-31-2018 02:27 AM)ttw Wrote: [ -> ]Two questions come to mind. First, what is the error expansion (perhaps by way of a Taylor series), and second, how much doe the method cost. Could the extra cost be used to move to a higher (or extrapolated like Richardson's method) integration method.

I have tackled the issue of enhancing the Romberg method by replacing the initial guess for the integral that uses the Trapezoidal method with using Simpson's rule. The enhanced version of Romberg works better. Locate the link "A New Face of Romberg Integration" on this link. The link right next to it offers an Excel file that shows the VBA code for the enhanced Romberg.

Namir
(05-31-2018 02:27 AM)ttw Wrote: [ -> ]Generally, three point rules like Simpson's are just as good (in order of magnitude) as the one with one more point (Bode's rule in this group.) Thus, another functional evaluation may not be that valuable if used in a simple integration formula.

I agree. I had noticed that Simpson's 1/3 rule is not that inferior to Simpson's 3/8 rule. With the computing power we have, we can afford to use Simpson's rules with a high number of interval divisions. We can also use the method I am proposing in this thread. Such a method would have been abandoned by mathematicians who did not have access to computers. They kept their algorithms simple.
The first step of Romberg integration yields Simpson's rule. You should get the same results (if the programs both use the same number of points.)

https://en.wikipedia.org/wiki/Romberg%27s_method
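
A minimal Romberg table in Python (my own sketch, not code from the thread) makes the point above concrete: the first extrapolated entry R[1][1] is exactly Simpson's rule with two intervals:

```python
def romberg_table(f, a, b, levels):
    """Build the Romberg table: R[k][0] is the trapezoid rule with 2**k
    intervals, and R[k][j] is the j-th Richardson extrapolation, which
    cancels the next even-order error term. R[1][1] equals Simpson's rule
    with two intervals; R[levels-1][levels-1] is the best estimate."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for k in range(1, levels):
        h /= 2
        # refine the trapezoid estimate using only the new midpoints
        R[k][0] = R[k - 1][0] / 2 + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2 ** (k - 1) + 1))
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R
```

Running this on 1/x over (1, 2) and comparing R[1][1] against the two-interval Simpson value (h/3)(f(1) + 4 f(1.5) + f(2)) shows the equivalence directly.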

The real problem with Romberg is that the precision increases faster than the number of digits a computer has. A 128th-order convergent method isn't really useful without lots of significant digits. (That's why I use Monte Carlo with either quadratic or cubic convergence, if possible; it beats the usual square-root order.)

Romberg, Gaussian quadrature (and its extended family), etc., tend to fail in higher dimensions due to the Curse of Dimensionality. One can postpone the damage by using the hyperbolic cross-points; sacrificing a rooster during the dark of the moon doesn't help in this case.
Reference URL's
• HP Forums: https://www.hpmuseum.org/forum/index.php