The Museum of HP Calculators

HP Forum Archive 20

[ Return to Index | Top of Index ]

Integration
Message #1 Posted by Paul Dale on 3 Nov 2011, 7:52 a.m.

As a result of providing local registers for subroutines and programs in the 34S, I'm looking at redoing the internal integration routine. The current Gauss-Kronrod quadrature was chosen primarily because it would fit into the available five registers. With locals this can be relaxed and something more sophisticated is possible.

I'm eyeing off the PPC IG routine as a possibility, but would like to know if anyone here has a pet keystroke integration routine of some kind or another.

The current integrator consumes flash equivalent to about 230 program steps (this includes both the program and the tables of constants required) so something of similar magnitude (or smaller) would be perfect.

Any suggestions anyone? Better still, any keystroke programs :-)

- Pauli

      
Re: Integration
Message #2 Posted by Marcus von Cube, Germany on 3 Nov 2011, 9:09 a.m.,
in response to message #1 by Paul Dale

Just to add some info about the current development:

Local registers, local flags, recursion, nested solve/integrate/sum/prod, ...

And a new keyboard layout is active now.

      
Re: Integration
Message #3 Posted by Kevin Colville (S. Africa) on 3 Nov 2011, 11:50 a.m.,
in response to message #1 by Paul Dale

My pet routine is a modified midpoint rule coupled with Richardson acceleration, i.e., Romberg's method.

This code is similar to the method devised for the HP-34C by W.H. Kahan as described in HP Journal, Aug 1980. (This integration routine was later used in the 15c, 42s, ...)

I adapted Peter Henrici's version (from his book Computational Analysis with the HP-25 Pocket Calculator) to the 21s. It's about 70 steps, uses all ten registers of the 21s and was quite challenging because the 21s lacks an index register and only has x==0? and x<=y? tests. I'm sure there are better versions of Romberg for more advanced models without the limitations of the 21s (and the HP-25).
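For anyone curious how the midpoint/Richardson bookkeeping fits together, here is a rough Python sketch of a midpoint-based Romberg scheme. This is my own rendering, not Kevin's 21S program; the function name and tolerances are illustrative. Each refinement triples the panel count so earlier samples can be reused, and since the midpoint rule's error expansion is in even powers of h, each Richardson column divides the error by 9.

```python
def romberg_midpoint(f, a, b, tol=1e-10, max_level=12):
    """Romberg integration built on the midpoint rule (sketch).

    Tripling the number of panels lets every old midpoint be reused;
    Richardson extrapolation then cancels successive even powers of h.
    """
    h = b - a
    n = 1
    rows = [[h * f(a + h / 2.0)]]        # level 0: one midpoint panel
    for level in range(1, max_level + 1):
        h /= 3.0
        n *= 3
        total = rows[-1][0] / 3.0        # reused points, rescaled by h/3
        for i in range(n):
            if i % 3 != 1:               # i % 3 == 1 are the old midpoints
                total += h * f(a + (i + 0.5) * h)
        row = [total]
        # Richardson: h shrinks by 3, so column j divides error by 9^j
        factor = 1.0
        for prev in rows[-1]:
            factor *= 9.0
            row.append(row[-1] + (row[-1] - prev) / (factor - 1.0))
        rows.append(row)
        if abs(row[-1] - rows[-2][-1]) < tol * max(1.0, abs(row[-1])):
            return row[-1]
    return rows[-1][-1]
```

On a machine with an index register this maps naturally onto a loop over stored rows; on something like the 21S, as Kevin notes, the register juggling is the hard part.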

Kevin

            
Re: Integration
Message #4 Posted by Paul Dale on 3 Nov 2011, 5:39 p.m.,
in response to message #3 by Kevin Colville (S. Africa)

I've got that book and will check it out.

- Pauli

      
Re: Integration
Message #5 Posted by Kiyoshi Akima on 3 Nov 2011, 4:14 p.m.,
in response to message #1 by Paul Dale

My personal preference would be for some kind of locally-adaptive routine. I've always considered the lack of local adaptivity one of the most serious flaws in the Romberg-based routine used from the HP 34C to the HP 50g (and the 15C LE, if I'm going to mention the most chronologically recent).

There are plenty of locally adaptive algorithms based on Newton-Cotes and Maclaurin formulas, as well as Gaussian ones.

As an alternative to the Gauss-Kronrod I'd suggest considering Gauss-Bond instead: http://www.crbond.com/integration.htm

G-B requires no more work than G-K and attains a higher order for the same number of function evaluations.

Also, if you include the nonlinear argument transformation as done in the HP and PPC-ROM routines, please consider a flag to turn it off.

            
Re: Integration
Message #6 Posted by Paul Dale on 3 Nov 2011, 5:40 p.m.,
in response to message #5 by Kiyoshi Akima

I didn't know about G-B quadrature. I don't think I'll implement it, however; I think some kind of adaptive method is better value for the bytes.

Any specific suggestions (or code) for a locally-adaptive routine?

- Pauli

                  
Re: Integration
Message #7 Posted by Kiyoshi Akima on 3 Nov 2011, 6:31 p.m.,
in response to message #6 by Paul Dale

If you've already got a G-K routine then it shouldn't take much effort to change it to a G-B. Mostly changing the constants.

I've got a bunch of locally-adaptive routines I'm playing with, but they're all in SysRPL for the 50g. They rely on recursion so I don't think they'll port well to the 34S. I've got G-B, Newton-Cotes, and Maclaurin. If you'd like, I can send you the documented SysRPL sources.

IIRC there's a FORTRAN routine in Conte & de Boor's "Elementary Numerical Analysis: An Algorithmic Approach". It's an oldie which I used as a textbook back in my student days and I remember programming my 41C to do it, but that program used synthetics to expand the return stack.

                        
Re: Integration
Message #8 Posted by Paul Dale on 3 Nov 2011, 6:50 p.m.,
in response to message #7 by Kiyoshi Akima

I understand that G-B would be a switch-the-constants exercise, but it will still be static, i.e., a small gain for the effort.

The 34S will handle recursion reasonably in version 3. Not as well as in RPL but okay.

- Pauli

                  
Re: Integration
Message #9 Posted by Kiyoshi Akima on 7 Nov 2011, 2:31 p.m.,
in response to message #6 by Paul Dale

Below is the UserRPL code for a crude non-recursive globally convergent adaptive quadrature routine on a 50g. It uses a second-order open Newton-Cotes formula (25.4.22 in Abramowitz & Stegun).

It's globally convergent in that it only requires that the total error satisfy the criterion, instead of requiring that each subinterval meet its "quota" of error. Thus, if 90% of the error occurs in 10% of the interval but the other 90% of the interval only contributes 10% of the error, then the routine is satisfied. A locally convergent routine would require that the first 10% of the interval contribute no more than 10% of the error tolerance.

I slapped the code together in an evening, so I'm sure it's far from optimal. It also doesn't do any error- or limit-checking.

lower_limit upper_limit function error_tolerance -> sum

<< -15. * ROT 4. PICK - { }
   << NOVAL DUP DUP2 -> a h y2 y4 y6 y1 y3 y5 y7
      << IF 10 FS? THEN a 1. DISP a h + 2. DISP END
         a h .125 * + <-f EVAL DUP 'y1' STO
         a h .375 * + <-f EVAL DUP 'y3' STO
         a h .625 * + <-f EVAL DUP 'y5' STO
         a h .875 * + <-f EVAL DUP 'y7' STO
         + + + DUP + y2 - y6 - h * 6. /
         y2 y6 + DUP + y4 - h * 3. /
         OVER - ABS NEG SWAP
         a h y1 y2 y3 y5 y6 y7 10. ->LIST 1. ->LIST
      >>
   >>
   -> a <-f e h d r
   << a h DUP2 .25 * + <-f EVAL
      a h .5 * + <-f EVAL
      a h .75 * + <-f EVAL
      r EVAL 'd' STO+
      WHILE 0. 1. d SIZE FOR i d i GET HEAD + NEXT e <
      REPEAT d SORT DUP TAIL SWAP HEAD LIST-> DROP
         8. ROLL 8. ROLL .5 * 'h' STO 'a' STO
         h a OVER + SWAP 5. ROLLD 5. ROLLD r EVAL 6. ROLLD
         a h 5. ROLLD 5. ROLLD r EVAL UNROT DROP2 + + 'd' STO
      END
      0. 1. d SIZE FOR i d i GET 2. GET + NEXT
   >>
>>
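For readers who don't speak RPL, here is a rough Python rendering of the same globally-convergent idea. It is not a line-for-line translation: it uses a Simpson/half-Simpson pair in place of the open Newton-Cotes formula above, and it re-evaluates endpoint samples rather than caching them the way the RPL version does. The function and parameter names are illustrative.

```python
import heapq

def adaptive_global(f, a, b, tol=1e-8, max_splits=10000):
    """Globally convergent adaptive quadrature (sketch).

    Each heap entry holds a subinterval's error estimate (negated,
    because heapq is a min-heap), its endpoints, and its refined
    Simpson value.  Only the TOTAL error must meet the tolerance;
    the worst subinterval is split first, so effort concentrates
    where the error actually lives.
    """
    def panel(lo, hi):
        mid = 0.5 * (lo + hi)
        s1 = (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))
        lm, rm = 0.5 * (lo + mid), 0.5 * (mid + hi)
        s2 = (hi - lo) / 12.0 * (f(lo) + 4.0 * f(lm) + 2.0 * f(mid)
                                 + 4.0 * f(rm) + f(hi))
        return (-abs(s2 - s1) / 15.0, lo, hi, s2)

    heap = [panel(a, b)]
    total_err = -heap[0][0]
    for _ in range(max_splits):
        if total_err <= tol:
            break
        neg_err, lo, hi, _ = heapq.heappop(heap)   # worst subinterval
        total_err += neg_err                       # drop its error...
        mid = 0.5 * (lo + hi)
        for child in (panel(lo, mid), panel(mid, hi)):
            total_err -= child[0]                  # ...add the children's
            heapq.heappush(heap, child)
    return sum(entry[3] for entry in heap)
```

The heap plays the role of the sorted list `d` in the RPL code: both keep the subintervals ordered so the worst offender is always split next.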

            
Re: Integration
Message #10 Posted by Kevin Colville (S. Africa) on 4 Nov 2011, 4:56 a.m.,
in response to message #5 by Kiyoshi Akima

Romberg has the advantage that it doesn't need a table of weights and abscissae like the Gauss-* methods. At least the 34s has more space for that than earlier calculators. On the other hand, Romberg can require a *lot* of function evaluations.

Adaptive methods require a trade-off between using a large enough table (can be held in ROM) for the base G-* method and enough return levels (in RAM) for the recursion. So it depends very much on the resources available in the 34s.

Quadrature is hard: everyone has a zoo of pathological examples that numerical methods trip over, whether wildly fluctuating integrands or steep singularities. Kahan's HP-34C article, the advice in several HP manuals, and of course the 15C Advanced Functions Handbook all address these.

Good luck.

                  
Re: Integration
Message #11 Posted by Paul Dale on 4 Nov 2011, 5:20 a.m.,
in response to message #10 by Kevin Colville (S. Africa)

Space use is easy. The current integration uses the equivalent of 230 steps of program. Any constant is equivalent to four steps.

The PPC IG routine is well under half of this for a Romberg implementation.

- Pauli

            
Re: Integration
Message #12 Posted by Wes Loewer on 4 Nov 2011, 3:34 p.m.,
in response to message #5 by Kiyoshi Akima

Quote:
My personal preference would be for some kind of locally-adaptive routine. I've always considered the lack of local adaptivity one of the most serious flaws in the Romberg-based routine used from the HP 34C to the HP 50g (and the 15C LE, if I'm going to mention the most chronologically recent).

I agree. The Romberg method made perfect sense for the 34C with its limited memory. In the article, Kahan states: "For example, consider Gaussian quadrature. This method is widely regarded as 'best' in the sense that it very often requires fewer samples than most other methods to achieve an average A that approximates the desired A to within some preassigned tolerance." He goes on to explain why Gauss was not used as it would have taken all the calculator's memory to store the necessary nodes and weights.

Now that calculators have more memory, the widely used Gauss-Kronrod method seems well suited. I don't know how tight memory is on the 34S, but if you can fit something like Gauss-Kronrod onto it, it would be worth it. Romberg works fine for well-behaved functions, but it really bogs down on functions with cusps or corners. It reiterates deeper and deeper on the entire function instead of just where the problem is.

Functions with corners from absolute values are common when calculating the distance traveled given the velocity function: d = integral abs(v(x)) dx.
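As a quick illustration of why such corners hurt a non-adaptive rule, here is a small Python example (the velocity function is made up for the purpose). Simpson's rule applied across the corner carries a stubborn error, while splitting the integral at the root of v makes each piece a smooth quadratic, for which Simpson is exact:

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

v = lambda t: t * t - 1.0        # illustrative velocity, root at t = 1
speed = lambda t: abs(v(t))      # distance = integral of |v(t)|

# Exact distance on [0, 2] is 2: 2/3 from [0, 1] plus 4/3 from [1, 2].
whole = simpson(speed, 0.0, 2.0, 10)                        # straddles the corner
split = simpson(speed, 0.0, 1.0, 2) + simpson(speed, 1.0, 2.0, 2)
# 'split' is exact to rounding with only 2 panels per piece, while
# 'whole' still misses by a few percent despite using more samples.
```

This is exactly the situation a locally adaptive routine handles well: it effectively finds the corner and concentrates subdivisions there.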

I came across such an integral on a standardized exam that would have taken over 30 minutes on the 50g in the default STD mode. (Students had 50 minutes to answer 17 calculator-based questions on that section of the exam.) Setting the mode to FIX 5 reduces the time to about 30 seconds; for comparison, the slower TI-84+, which uses the 15-node G-K method, takes about 6 seconds.

The recursive 15-node Gauss-Kronrod method is commonly used, but if memory is tight, fewer nodes could be used. Because of symmetry, the 15-node G-K requires storage of 8 node values and 12 weight values, for a total of 20 values. A 7-node version would require 4 node values and 6 weights, for a total of 10 values.
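Here is a Python sketch of that symmetric-storage idea. Note an important hedge: the rule pair below is a (non-nested) 3-point/5-point Gauss-Legendre pair standing in for a true Gauss-Kronrod pair, so unlike real G-K the coarse rule's samples are not reused. Only the bookkeeping carries over: half-stored symmetric rules, an embedded error estimate, and recursive bisection.

```python
import math

# Symmetric rules: store only the nonnegative abscissae and weights;
# the negative half is generated by reflection, halving table size.
G3 = [(0.0, 8.0 / 9.0),
      (math.sqrt(3.0 / 5.0), 5.0 / 9.0)]
_a = 2.0 * math.sqrt(10.0 / 7.0)
G5 = [(0.0, 128.0 / 225.0),
      (math.sqrt(5.0 - _a) / 3.0, (322.0 + 13.0 * math.sqrt(70.0)) / 900.0),
      (math.sqrt(5.0 + _a) / 3.0, (322.0 - 13.0 * math.sqrt(70.0)) / 900.0)]

def _apply(rule, f, a, b):
    """Apply a symmetric rule stored as (node, weight) pairs for x >= 0."""
    c, h = 0.5 * (a + b), 0.5 * (b - a)
    total = 0.0
    for x, w in rule:
        total += w * f(c + h * x)
        if x != 0.0:
            total += w * f(c - h * x)   # mirrored node
    return total * h

def adaptive(f, a, b, tol=1e-9, depth=30):
    """Recursive adaptive quadrature with an embedded rule pair."""
    coarse = _apply(G3, f, a, b)
    fine = _apply(G5, f, a, b)
    if abs(fine - coarse) < tol or depth == 0:
        return fine
    m = 0.5 * (a + b)
    return (adaptive(f, a, m, tol / 2.0, depth - 1)
            + adaptive(f, m, b, tol / 2.0, depth - 1))
```

A real G-K pair would store the Kronrod nodes and both weight sets in the same half-table layout, which is where the 20-value and 10-value counts above come from.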

~wes

                  
Re: Integration
Message #13 Posted by Paul Dale on 4 Nov 2011, 6:18 p.m.,
in response to message #12 by Wes Loewer

The 34S currently uses a 10/21-point Gauss-Kronrod quadrature. The weights and nodes take just under half the space occupied by the routine. The big advantage of this integrator is the forty bytes of RAM it requires.

G-K is very good for well-behaved functions, but I feel an adaptive method is preferable. A recursive G-K method could be used, but that would increase memory requirements, and we're really tight on both RAM and flash. Romberg would consume less flash but more RAM.

All said, a wrapper has been written for the built-in integrator to make it act more adaptively.

- Pauli

                        
Re: Integration
Message #14 Posted by Kiyoshi Akima on 7 Nov 2011, 2:16 p.m.,
in response to message #13 by Paul Dale

A 10/21 G-K, eh? That's exact for order-32 polynomials. A 9/19 G-B would be exact for order-37 polynomials, with two fewer function evaluations. Admittedly, it would cost two additional constants, eight additional multiplications, and sixteen additional additions, though the extra operations should just be a matter of changing loop limits rather than additional code.

A 10/21 G-K starts with a 10-point Gauss-Legendre (G-L) quadrature, which is "optimal" for 10 function evaluations. An additional 11 points are then selected so that, in combination with the original 10, we have an optimal 21-point expansion (note that this is NOT an optimal 21-point quadrature, just optimal using the 10 original points). The two quadratures then provide an error estimate. If the estimate is "good enough" then the 21-point result is accepted. Otherwise the interval is subdivided and the process repeated on each subinterval.

G-B goes the other way. If we're going to evaluate the function 21 times anyway, let's do the best job of it we can, which is a 21-point G-L. We then pick an appropriate subset of those 21 points to evaluate another result so we can estimate the error. Alternatively, we can use fewer points, such as a 9/19 G-B and still get a "better" result.


