The Museum of HP Calculators

HP Forum Archive 17

Re: Bugs
Message #2 Posted by Kiyoshi Akima on 12 Sept 2007, 12:17 p.m., in response to message #1 by Palmer O. Hanson, Jr.

Sentry CA756. This is their high-end graphing calc: the package says "Compare with TI-83." I picked one up cheap to play with, not having played with anything comparable to a TI for decades.

sin(PI) gives zero. What's even worse is that int(PI) does not produce an integer. Apparently the built-in value for PI includes some extra guard digits and they don't get stripped out properly, leaving 3 + a small value. int(acos(-1)) has a similar problem, as does int(x*PI) for many (but not all) values of x.

The random number generator isn't. Multiple invocations within a single program run return the same result, unless the program halts for display. Run the program a second time and you get a different number, which then repeats.

Re: Bugs
Message #3 Posted by Meenzer on 12 Sept 2007, 12:46 p.m., in response to message #2 by Kiyoshi Akima

Quote: sin(PI) gives zero. What's even worse...

Shouldn't it?

Re: Bugs
Message #4 Posted by Kiyoshi Akima on 12 Sept 2007, 12:59 p.m., in response to message #3 by Meenzer

Does your calculator give zero for sin(PI)? If you have an infinitely precise value for pi, then the sine of that value is indeed 0. But since the calculator is attempting to produce the value of a ten-digit (say) approximation of pi, the *correct* result should be the sine of that approximation.
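[Editor's note: a quick Python sketch of the point, using double precision in place of a calculator's ten-digit BCD arithmetic.]

```python
import math

# A ten-digit approximation of pi, as a ten-digit calculator stores it
pi10 = 3.141592654

# Since pi10 = pi + d with d about 4.1e-10, sin(pi10) = -sin(d) is about -d:
# a machine that computes the sine of its own stored constant should return
# roughly -4.1e-10, not 0
result = math.sin(pi10)
print(result)                    # about -4.1e-10
expected = -(pi10 - math.pi)     # nearly the same value
```

A calculator that returns exactly 0 here is rounding the answer to its displayed precision rather than reporting the true sine of its stored constant.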

Re: Bugs
Message #5 Posted by Meenzer on 12 Sept 2007, 1:12 p.m., in response to message #4 by Kiyoshi Akima

Well, indeed, my TI and Casio calculators, the HP 48G as well as some PC programs produce sin(pi)=0. My HP 15c and HP 35s produce some value E-13...

Edited: 13 Sept 2007, 9:10 a.m. after one or more responses were posted

Re: Bugs
Message #6 Posted by Kiyoshi Akima on 12 Sept 2007, 1:14 p.m., in response to message #5 by Meenzer

But how many of them produce a non-integral result for int(PI)?

Re: Bugs
Message #7 Posted by Meenzer on 12 Sept 2007, 1:22 p.m., in response to message #6 by Kiyoshi Akima

The ones that have an integer function yield an integer ;-)

Re: Bugs
Message #8 Posted by Karl Schneider on 12 Sept 2007, 11:47 p.m., in response to message #1 by Palmer O. Hanson, Jr.

Welcome back, Palmer!

Quote: TI-55II: ... There was a problem in the statistics routine such that if the user entered 4, 5, 6, 7, and 8 the mean would be displayed as 6 but the value in the machine was actually 6 - 1E-10! This was caused by use of a different algorithm for statistics accumulation where, for example, the sum of the input values was not stored, but rather the current mean and the number of entries were stored. (V8N1P26-27 of TI PPC Notes.)

I suspect that the running calculation of the mean might have been intended to prevent possible errors in the calculation of the standard deviation of large values, due to roundoff errors of summation. For example, the HP models -- even the 12-digit Pioneer-series -- that calculate standard deviation from summation registers will return 0 as the sample standard deviation of [999,999 1,000,000 1,000,001]. The correct answer (1) will be returned by the models (e.g., HP-17B/BII, HP-27S) that retain all the entered data, then calculate standard deviation from the mean.

The TI-55II undoubtedly lacked the RAM to store all statistical input data.

-- KS
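[Editor's note: Karl's [999,999 1,000,000 1,000,001] example can be reproduced in Python by simulating 12-digit summation registers. `round12` below is a stand-in for the calculator's 12-digit arithmetic, not any real firmware.]

```python
import math

def round12(x):
    """Round to 12 significant digits, mimicking a 12-digit register."""
    if x == 0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))
    return round(x, 11 - exp)

data = [999999, 1000000, 1000001]
n = len(data)

# Summation-register approach: sigma(x^2) = 3,000,000,000,002 needs
# 13 digits, so the trailing "2" is rounded away in the register
sx = sx2 = 0.0
for x in data:
    sx = round12(sx + x)
    sx2 = round12(sx2 + x * x)
var_registers = (sx2 - sx * sx / n) / (n - 1)   # cancellation: gives 0

# Retained-data approach: subtract the mean first, then square --
# the deviations are -1, 0, 1, and nothing cancels catastrophically
mean = sum(data) / n
var_stored = sum((x - mean) ** 2 for x in data) / (n - 1)   # gives 1
```

The failure is pure cancellation: the variance lives entirely in the 13th digit of the sum of squares, which a 12-digit register cannot hold.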

Re: Bugs
Message #9 Posted by Walter B on 13 Sept 2007, 2:33 a.m., in response to message #8 by Karl Schneider

Hi, Palmer, thanks for the overview!

Karl, there are at least 3 ways to do this statistics job:
1. The calc stores all the input data.
2. The calc stores the necessary sums in the summation registers.
3. The calc has to save memory and uses the approach Palmer sketched for the TI-55II.

Way 3 is the worst due to roundoff errors. Fully agree with you on this. I do not remember any calc using this way for decades; memory was extremely expensive only in the very first years of scientific calcs.

Way 2 is just fine if used with a tiny amount of thought. The fact that you run into problems with small deviations on top of big numbers is well known, and I have the feeling it was even mentioned in an early HP calc manual. Anyway, for sheer laziness, emmh, economy, no reasonable person will key in 7 digits repeatedly where 2 are sufficient to do the job.

Way 1 replaces the missing brains of the user with additional memory in the calc. It became affordable when memory costs dropped.

Re: Bugs
Message #10 Posted by Palmer O. Hanson, Jr. on 13 Sept 2007, 9:01 a.m., in response to message #9 by Walter B

The real advantage of Way 1 is that it allows the user to calculate residuals instead of only calculating correlation coefficients. That allows the user to identify individual bad data points and erroneously entered data points.

Before there were hand calculators I used a program like that on a Honeywell time-share network. My supervisor had a brand new shiny HP-45 and suggested that I could use it at my desk instead of going to the terminal of the time-share network. He showed me how to enter data and I tried to run a problem. When the results didn't look right I asked him how I could review the input data. He told me that I couldn't. That was the last time I used an HP-45 for statistics.

An example of the problems with Way 2 appeared in the article "Hard Wired Functions" in the March/April 1981 issue of PPX Exchange, which was based on my submission. I had been exposed to the idea much earlier in a 1949 class in curve fitting by Professor Eggers at the University of Minnesota.

When I stumbled on the problem with the statistics routine in the TI-55II I had no idea as to what might be going on. I shared the observation with George Thomson, who was one of the frequent contributors to TI PPC Notes. He immediately recognized the source of the problem and told me that the methodology was common back in the pre-war era.
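[Editor's note: the residual check Palmer describes is easy to sketch. A toy Python example with hypothetical data -- a plain least-squares fit, only to show how the largest residual flags a mis-keyed point.]

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.1, 6.0, 80.2, 10.1]   # 4th point mis-keyed (should be about 8)

a, b = linear_fit(xs, ys)
residuals = [y - (a * x + b) for x, y in zip(xs, ys)]

# The entry with the largest residual is the suspect data point
worst = max(range(len(xs)), key=lambda i: abs(residuals[i]))
```

With only correlation coefficients and summation registers, as on the HP-45, there is no way to run this check: the individual points are gone the moment they are accumulated.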

Re: Bugs
Message #11 Posted by Walter B on 13 Sept 2007, 2:48 p.m., in response to message #10 by Palmer O. Hanson, Jr.

Palmer, you are of course perfectly right. I'd just not run a full-fledged ANOVA on a pocket calc. Any stat data I use such a calc for is a small amount (let's say up to 30 points maximum), and I tend to look at the data before keying them in. For bigger data sets, nowadays PCs offer far better tools, starting with scatter diagrams ;)

Re: Bugs
Message #12 Posted by Paul Dale on 13 Sept 2007, 4:37 p.m., in response to message #9 by Walter B

I had always been under the impression that keeping a running mean and variance (or std dev) instead of the sums was done for numerical stability. More arithmetic operations are required to use this approach (thus it is slower) but it isn't as prone to overflow and it doesn't require any additional storage.

A simple two-point example where numbers are stored with two significant figures:

```
step  datum  ----- sigmaX -----   ------- mean -------
             2 sig digits  true   2 sig digits  true
1     10     10            10     10            10
2     0.8    11            10.8   5.5           5.4
```

I hope that is clear enough; I'm averaging two values, 10 and 0.8, and showing the true values to infinite digits and the rounded versions. By maintaining the mean instead of the sum we get:

```
step  datum  running mean
1     10     10
2     0.8    10 * (1 / 2) + 0.8 / 2 = 5.4
```

The update formula is:

```
m(n) = m(n-1) * ((n-1) / n) + x(n) / n
```

The overflow/loss of precision is avoided. A similar (more complicated) formula can be derived for the variance too.

I've been thinking about how to implement the usual statistics functions and am torn between the above approach and storing double or longer length internal summations...

- Pauli
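[Editor's note: the "more complicated formula for the variance" Pauli alludes to is usually credited to Welford. A Python sketch of the running update, an illustration rather than Pauli's own code; note that mean + delta/n is algebraically the same as mean*(n-1)/n + x/n from the post above.]

```python
def welford(data):
    """One-pass running mean and sample variance (Welford's algorithm).

    No large sums are kept, so there is no overflow and no catastrophic
    cancellation between sigma(x^2) and (sigma x)^2 / n.
    """
    n = 0
    mean = 0.0
    m2 = 0.0   # running sum of squared deviations from the current mean
    for x in data:
        n += 1
        delta = x - mean
        mean += delta / n            # m(n) = m(n-1) + (x(n) - m(n-1)) / n
        m2 += delta * (x - mean)     # uses delta against old AND new mean
    variance = m2 / (n - 1) if n > 1 else 0.0
    return mean, variance
```

On Karl's example from message #8 this returns a mean of 1,000,000 and a sample variance of exactly 1, where the summation-register formula collapses to 0.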
