The Museum of HP Calculators

HP Forum Archive 18

Numerical Integration on the 35S, Message #1, posted by Kiyoshi Akima on 21 Oct 2008, 1:42 p.m. Could someone with a 35S and five spare minutes use that machine to integrate FRAC(x) from 0.0 to 6.4 in FIX 2 and FIX 4 and post the results here, along with approximate run times? For the timing, an order of magnitude will be sufficient (e.g. "less than a second", "ten seconds or so", "about a minute"). This isn't a trick question. The integrand is periodic and singular, and it has a simple analytic solution. I'm curious to see what that machine does with a simple, naive approach to this problem.

Re: Numerical Integration on the 35S, Message #2, posted by Scott Newell on 21 Oct 2008, 3:33 p.m., in response to message #1 by Kiyoshi Akima. Mine spits out 1.28 after about 2 seconds. (No difference in result or runtime in fix 2 or fix 4.) Edited: 21 Oct 2008, 3:38 p.m.

Re: Numerical Integration on the 35S, Message #3, posted by Kiyoshi Akima on 21 Oct 2008, 6:26 p.m., in response to message #2 by Scott Newell. Thanks. That's about what I expected, given the results on other machines I've tried this on. Unfortunately, the analytical solution gives 3.08.
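[For reference, the analytic value follows from the geometry of the sawtooth: each full period of FRAC(x) is a unit right triangle of area 1/2, and the leftover partial period is a smaller triangle. A quick Python check of the 3.08 figure (my own sketch, not part of the original thread):]

```python
import math

def frac_integral(b):
    """Exact integral of FRAC(x) from 0 to b (b >= 0): each full period
    contributes a right triangle of area 1/2; the leftover partial
    period contributes a triangle of area FRAC(b)**2 / 2."""
    full = math.floor(b)
    part = b - full
    return 0.5 * full + 0.5 * part ** 2

print(round(frac_integral(6.4), 2))  # 3.08 = 6*(1/2) + 0.4**2/2
```

The same formula gives approximately pi for the upper limit 6.532151583 discussed below.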

Re: Numerical Integration on the 35S, Message #4, posted by Karl Schneider on 22 Oct 2008, 2:10 a.m., in response to message #3 by Kiyoshi Akima. Kiyoshi -- Welcome back! My first challenge was finding the FP (Fractional Part) function on the HP-35s. It's under the "INTG" menu. I was somewhat surprised that the terrible answers never got any better, and the execution time hardly increased -- no matter how many decimal digits were selected. The HP-32S, HP-15C, and HP-42S gave identical results. The problem is the discontinuities -- vertical drop-offs at a single point -- in the sawtooth integrand function, of which there are six. These are not detected at all by the sampling when the integration covers the entire range from 0.0 to 6.4, which includes a "partial period" from 6.0 to 6.4. However, the sampling falls much closer to those singular points if the integration is performed between 0.0 and 6.0; the integration takes considerably longer and is accurate percentage-wise, but not quite within the estimated error. Of course, the best way to deal with this situation is to subdivide the integral so that the endpoints fall at the discontinuities, and to use Sigma+ to aggregate the results. All of this is discussed in detail in the HP-34C and HP-15C handbooks, available on the MoHPC DVD/CD sets. These references are listed in my one (and only) Forum Article #556. The methods of numerical integration on the HP-35s are no different from those of its predecessor models. -- KS Edited: 22 Oct 2008, 5:26 a.m.

Re: Numerical Integration on the 35S, Message #5, posted by Kiyoshi Akima on 22 Oct 2008, 1:55 p.m., in response to message #4 by Karl Schneider. I'd just hoped that HP had made some improvements to the integration routine in the nearly thirty years between the 34C and the 35S :-( I originally stumbled across this when I tried integrating FRAC(x) from 0 to 6.532151583 with various integrators in the hope of getting something approximating pi, and got 1.738 from the built-in integrator instead. Not a very good approximation, is it? Nyquist tells us we need to sample at least thirteen points. However, the first three approximations, using seven points, agree to ten digits, and there's no way to force the built-in integrator to use more points. Incidentally, the TI-84+SE produces a reasonable answer.

Re: Numerical Integration on the Palm, Message #6, posted by Pal G. on 22 Oct 2008, 4:49 p.m., in response to message #5 by Kiyoshi Akima. Kiyoshi, Up to this point, I have more experience with Palm math software than with HP calcs, so I hope this post helps with your integration research. First, EasyCalc has two built-in integration functions. It can approximate integrals using Simpson's Rule: ```fsimps(min:max:func{:error{:params}}) fsimps(0:6.532151583:"fpart(x)":1E-6) ans: 3.152648745845178 time: 1s (Palm TX) ``` , or using Romberg integration: `fromberg(min:max:func{:degree{:params}})` (note: degree is an optional integer parameter that specifies the degree of the polynomial used to approximate the function; I experimented with various values and report the best below). ```fromberg(0:6.532151583:"fpart(x)":11) ans: 3.141452262422343 time: 1s ``` ... Separately, another Palm program, PowerOne Graph, uses the Gauss-Kronrod method, which gave the best results. ```fnInt ("expression"; "variable"; lower; upper) fnInt("fPart(x)";"x";0;6.532151583) ans: 3.141592653645 time: ~15 seconds. ``` Best, PG Edited: 22 Oct 2008, 4:50 p.m.

Re: Numerical Integration on the Palm, Message #7, posted by Kiyoshi Akima on 22 Oct 2008, 5:38 p.m., in response to message #6 by Pal G. I believe the TI calculators use Gauss-Kronrod. Personally, I use the Gauss-Bond method, which is generally superior to Gauss-Kronrod, giving better results for the same amount of work (or the same results with less work). However, there is no such thing as the best method, and I know that. As I said earlier, I am disappointed that HP hasn't made any improvements in the nearly thirty years between the 34C and the 35S.

Re: Numerical Integration on the Palm, Message #8, posted by Egan Ford on 22 Oct 2008, 7:16 p.m., in response to message #6 by Pal G. Why did you select 6.532151583? What is the significance of 0.532151583 other than that it's (2*(Pi-3))^.5? Thanks.

Re: Numerical Integration on the Palm, Message #9, posted by Karl Schneider on 22 Oct 2008, 11:44 p.m., in response to message #8 by Egan Ford. Quote: Why did you select 6.532151583? What is the significance of 0.532151583 other than it's (2*(Pi-3))^.5? Egan -- I think that was the point. The value of the integral is the sum of the areas of all the right triangles, each of which is (1/2)*(base)^2. So, the first six triangles have a total area of 3. If the base of the last "partial" triangle is sqrt(2*(Pi-3)), then its area is pi-3, making the total area equal to pi, which was Kiyoshi's original intent that Pal was addressing. -- KS Edited: 22 Oct 2008, 11:48 p.m.
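[Karl's arithmetic checks out; a two-line verification in Python (illustrative only, not from the thread):]

```python
import math

# Six full sawtooth periods give six unit right triangles: area 6 * (1/2) = 3.
# A partial triangle of base b has area b**2 / 2, so choosing
# b = sqrt(2*(pi - 3)) makes that last area pi - 3 and the total exactly pi.
b = math.sqrt(2 * (math.pi - 3))
print(round(b, 9))            # 0.532151583
total = 6 * 0.5 + b ** 2 / 2  # 3 + (pi - 3)
print(total)                  # ≈ pi
```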

Re: Numerical Integration on the Palm, Message #10, posted by Egan Ford on 23 Oct 2008, 10:06 a.m., in response to message #9 by Karl Schneider. I figured as much, but was curious if 6.532151583 had any other meaning.

Re: Numerical Integration on the Palm, Message #11, posted by Pal G. on 25 Oct 2008, 2:12 a.m., in response to message #9 by Karl Schneider. Karl or Egan, Seeing Egan's conversion of the original 0.532151583 to (2*(pi-3))^0.5 reminded me of EasyCalc's "GuessIt" function, which tries to return a rational form of the last answer. However, EasyCalc fails to return a number in the above case. Some searching through HP docs revealed that the 50g has a few functions, such as ->Q, ->Qpi, and XQ, and at hpcalc.org I found the libs QPI and QXRACINE, all of which, I presume, attempt to do the same. However, none of these returned (2*(pi-3))^0.5 (otherwise I would not be typing this). The best I saw was 10237/19237. My question: did Egan find (2*(pi-3))^0.5 with textbook methods or with software? Thanks, PG Edited: 27 Oct 2008, 8:40 a.m.
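[The purely rational part of what ->Q and QPI do can be imitated with Python's standard fractions module, which finds best rational approximations via continued-fraction convergents. This is an illustration only; it assumes nothing about the Palm or 50g internals:]

```python
from fractions import Fraction

# Best rational approximations of 0.532151583 with bounded denominators
# (continued-fraction convergents, which is what limit_denominator computes).
x = Fraction(532151583, 10**9)
for dmax in (100, 1000, 20000):
    r = x.limit_denominator(dmax)
    print(dmax, r, float(r))
```

Of course, a purely rational search can never reproduce (2*(pi-3))^0.5; as I understand it, QPI-style programs also try forms involving pi, e, square roots, and logarithms, which is what it would take here.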

Re: Numerical Integration on the 35S, Message #12, posted by Karl Schneider on 23 Oct 2008, 2:10 a.m., in response to message #5 by Kiyoshi Akima. Kiyoshi -- I, too, am surprised that HP's Romberg method sometimes returns such a wrong answer for this problem using an approach you correctly call "naive". The results depend on the limits of integration -- 6.4 fails, but 6.39 seems to work OK, if slowly and without pinpoint accuracy. I also found that the 1993 TI-82 (using the Gauss-Kronrod method) handled this problem well: ```fnInt(fPart(X),X,0,6.4) ``` yielded 3.080. Between the discontinuities, this integrand is easy -- it's a straight line with a slope of +1. Specifying limits that don't span a discontinuity will give perfect answers quickly. A "smart" approach would employ dense sampling upon encountering an ill-behaved portion of the integrand. HP's method, to my understanding, will double the number of samples over the entire integrand if more work needs to be done. That approach is logical, because if there is one problematic area, there just might be more of them. Transformation of the integrand function into something "smoother" (if possible) and intelligent subdivision will help to give accurate answers that can be trusted. -- KS

Re: Numerical Integration on the 35S, Message #13, posted by V-PN on 22 Oct 2008, 3:26 a.m., in response to message #2 by Scott Newell. Exactly how did you do it?

Re: Numerical Integration on the 35S, Message #14, posted by Scott Newell on 22 Oct 2008, 11:01 a.m., in response to message #13 by V-PN. Quote: Exactly how did you do it? Set fix 2 or 4, put the limits of integration on the stack (0 to 6.4), went into equation mode and entered FP(X), then told it to integrate with respect to X. (I had to look up the frac function in the book--I couldn't find it on the calc!) Did I miss something? 0-6.39, fix 2 takes significantly longer (~15s) to integrate and returns 3.06. 0-6.399, fix 3 returns 3.082 in about 2 minutes.

Re: Numerical Integration on the 35S, Message #15, posted by V-PN on 22 Oct 2008, 3:19 p.m., in response to message #14 by Scott Newell. ?? select FN= X

Re: Numerical Integration on the 35S, Message #16, posted by Scott Newell on 22 Oct 2008, 4:38 p.m., in response to message #15 by V-PN. Quote: ?? select FN= X OK, if you want to integrate a program: ``` LBL B INPUT X FP RTN ``` Select program B as the function to be integrated: FN = B Enter the limits of integration: 0, ENTER, 6.4 Integrate with respect to X

Re: Numerical Integration on the 35S, Message #17, posted by V-PN on 23 Oct 2008, 6:30 a.m., in response to message #16 by Scott Newell. Thanks -- INPUT X was missing in my trial &)=p8!%!%#¤%&* >:-( I even translated the 35s Quick Guide into Finnish AND I still have problems, maybe because over the years for my integrating and solving I have used these: 15C, 18C, 19B/BII, 28C/S, 42s, 41C/CV/CX+Advantage+.. 48s/SX+Solve, 48G/GX, 48gII/49G/g+/50G, 71B+Math+.., BUT the 32S/SII, 33s, and 35s are slightly unfamiliar to me

Re: Numerical Integration on the 35S, Message #18, posted by Egan Ford on 22 Oct 2008, 6:18 p.m., in response to message #14 by Scott Newell. Quote: 0-6.39, fix 2 takes significantly longer (~15s) to integrate and returns 3.06. 0-6.399, fix 3 returns 3.082 in about 2 minutes. My supercharged 15C results: ```0- 6.39, fix 2 returns 3.01 in 0 seconds. 0- 6.399, fix 3 returns 3.103 in 2 seconds. 0- 6.3999, fix 4 returns 3.0854 in 7 seconds. 0- 6.39999, fix 5 returns 3.08004 in 10 < min < 20 (I forgot to watch) ``` Looks like the 35s has improved over the 15C in this area.

Re: Numerical Integration on the 35S, Message #19, posted by Scott Newell on 22 Oct 2008, 7:04 p.m., in response to message #18 by Egan Ford. Quote: My supercharged 15C results: ```0- 6.39, fix 2 returns 3.01 in 0 seconds. 0- 6.399, fix 3 returns 3.103 in 2 seconds. 0- 6.3999, fix 4 returns 3.0854 in 7 seconds. 0- 6.39999, fix 5 returns 3.08004 in 10 < min < 20 (I forgot to watch) ``` Looks like the 35s has improved over the 15C in this area. I just tried 0- 6.39999, fix 5: the 35S failed at about 9 minutes with a "nonexistent" error.

Re: Numerical Integration on the 35S, Message #20, posted by Egan Ford on 22 Oct 2008, 7:57 p.m., in response to message #19 by Scott Newell. Quote: I just tried 0- 6.39999, fix 5: the 35S failed at about 9 minutes with a "nonexistent" error. A hardcoded timeout, perhaps?

Re: Numerical Integration on the 35S, Message #21, posted by C.Ret on 22 Oct 2008, 6:29 p.m., in response to message #14 by Scott Newell. Hi, An interesting problem, due to the discontinuities that produce unexpected results in the integration algorithm. Apparently, the "new" integration algorithm has the same difficulties on RPL calculators! I just tried the suggested values on my HP-28S and got the following results, which are close to the observations made on the RPN ones. ```3: \<< FP \>> 2: { 0 6.4 } 2: 1.2800 1: .0001 ---\.S---> 1: 0.0001 in less than 1s. ``` A bad (but fast) estimate: 1.28 instead of the expected 3.08! But the worst is yet to come: ```3: \<< FP \>> 2: { 0 6.39 } 2: 3.0763 1: .0001 ---\.S---> 1: 0.0003 in nearly 7 min. ``` A better guess, but a bit long (about 7 min!). ```3: \<< FP \>> 2: { 0 6.399 } 2: 3.0792 1: .0001 ---\.S---> 1: 0.0003 in more than 32 min. ``` The best guess, but far longer... I am not convinced that the integration routine on the RPL calculators is any different from the previous RPN version... It took so much time that I had time to key in Simpson's method on my old SHARP PC-1211 (Approximate definite integral by Simpson's rule - P4-A-11 - p. 25-26, SHARP Pocket Computer PC-1211 Applications Manual - SHARP CORPORATION, Osaka, Japan 1F10T(1010)). By this method, on my old pocket computer, I got a reasonable answer, 3.072, in less than 3 min (100 divisions).

It's All About Romberg Integration, Message #22, posted by Les Wright on 23 Oct 2008, 6:30 a.m., in response to message #1 by Kiyoshi Akima. I have a little Mathematica program that does Romberg integration the straight-up textbook way--repeated applications of the Trapezoid Rule followed by Richardson extrapolation. The routine spits out 1.28 for your problem! The Romberg routine encoded in our HP calcs for the past few decades just doesn't agree with this problem.... I wonder where it breaks down? Les

Re: It's All About Romberg Integration, Message #23, posted by Kiyoshi Akima on 23 Oct 2008, 2:27 p.m., in response to message #22 by Les Wright. The Romberg algorithm breaks down in this case through the spurious agreement of the early results. Taking the "straight-up textbook way": ```F0 = FRAC(0.0) = 0.0 F1 = FRAC(1.6) = 0.6 F2 = FRAC(3.2) = 0.2 F3 = FRAC(4.8) = 0.8 F4 = FRAC(6.4) = 0.4 T1 = (F0+F4)*(6.4/2) = 1.28 T2 = (F0+2F2+F4)*(6.4/4) = 1.28 ``` Since these two values agree, the algorithm "decides" there's no need to continue. Had it continued, it would have found that ```T3 = (F0+2(F1+F2+F3)+F4)*(6.4/8) = 2.88 ``` doesn't agree with the previous value, and it would thus have gone on, using more and more function evaluations, until the estimates (aided by Richardson extrapolation) converged. This is the behavior seen when the upper limit is changed to 6.3, 6.39, 6.399, etc. In the case of the built-in schemes of all HPs (with built-in schemes, of course), RPL and RPN alike, a nonlinear transformation is performed in a usually successful effort to avoid the effects of periodic functions. In this particular case it fails, as the third estimate also agrees with the first two, "convincing" the scheme that it has found the correct answer. I didn't start out to find this. As I stated earlier, I was simply comparing various schemes on an integral that should have evaluated to approximately pi. It was only when the built-in scheme came out so wildly off that I dug deeper. Of course, the "correct" way to evaluate this integral numerically, assuming one had to do it numerically, is as ```6*INTG(0,1,FRAC(x),x) + INTG(0,0.4,FRAC(x),x) ``` exploiting the periodicity and at the same time removing the singularities.
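[Kiyoshi's walk-through is easy to reproduce. Below is a minimal textbook Romberg in Python (my own sketch, not HP's microcode): plain trapezoid estimates plus Richardson extrapolation, stopping as soon as two successive diagonal entries agree. It falls into exactly the trap described, and the subdivided form recovers 3.08:]

```python
import math

def frac(x):
    return x - math.floor(x)

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def romberg(f, a, b, tol=1e-4, max_level=20):
    """Textbook Romberg with a naive convergence test: stop as soon as
    two successive diagonal entries agree to within tol."""
    R = [[trapezoid(f, a, b, 1)]]
    for k in range(1, max_level):
        row = [trapezoid(f, a, b, 2 ** k)]
        for j in range(1, k + 1):
            # Richardson extrapolation of the previous column
            row.append(row[j - 1] + (row[j - 1] - R[k - 1][j - 1]) / (4 ** j - 1))
        R.append(row)
        if abs(row[k] - R[k - 1][k - 1]) < tol:
            return row[k]
    return R[-1][-1]

print(romberg(frac, 0, 6.4))   # ≈ 1.28: T1 = T2, spurious early agreement
# Subdividing at the discontinuities, as suggested above:
print(6 * romberg(frac, 0, 1) + romberg(frac, 0, 0.4))   # close to 3.08
```

The first two trapezoid estimates happen to sample only points where FRAC takes the same average value, so the convergence test fires immediately; the subdivided form never straddles a jump and converges properly.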

Re: It's All About Romberg Integration, Message #24, posted by Karl Schneider on 24 Oct 2008, 9:24 a.m., in response to message #23 by Kiyoshi Akima. Kiyoshi -- Nice explanation. I'm sure that this particular example was not anticipated by the original designers. Quote: In the case of the built-in schemes of all HPs (...), RPL and RPN alike, ... There's at least one HP model with factory-provided numerical integration as a standard feature in its ROM, but it's not RPN/RPL and is not micro-coded, either. The AOS HP-20S offers rootfinding and Simpson's Rule integration as loadable AOS keystroke routines. The results get close to the correct answer with a substantial number of samples (even numbers only), and improve as more samples are taken, but it just can't seem to settle on the exact answer of 3.08. ```10 samples: 3.41 20 samples: 3.20 64 samples: 2.88 100 samples: 3.072 200 samples: 3.072 650 samples: 3.08512820513 2000 samples: 3.07413333333 ``` The HP-20S rootfinding and integration capability is unintuitive and somewhat impractical. Say, for example, the user programs a function to evaluate it at a few points, then decides to root-solve or integrate that function. Loading the "root" or "int" program will clear program memory, wiping out the program containing the user's function in the process. I've wondered whether other algorithms, such as the Gauss-Kronrod that TI uses, could be adapted by HP while incorporating its traditional refinements (generally not evaluating at endpoints, irregular sampling). -- KS
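[Karl's Simpson's Rule figures are easy to reproduce. A short composite-Simpson sketch in Python (my own code, not the HP-20S keystroke routine): with 100 subintervals it gives the 3.072 reported both here and for the PC-1211 above, and with 64 it gives the 2.88 in the table:]

```python
import math

def frac(x):
    return x - math.floor(x)

def simpson(f, a, b, n):
    """Composite Simpson's rule with n subintervals (n must be even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # interior even nodes
    return s * h / 3

print(simpson(frac, 0, 6.4, 100))   # ≈ 3.072, as reported above
print(simpson(frac, 0, 6.4, 64))    # ≈ 2.88
```

The estimates wobble rather than converge because no fixed even grid can resolve the jumps at x = 1, 2, ..., 6; the error depends on where the sample points happen to land relative to them.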
