The Museum of HP Calculators

HP Forum Archive 09

 Re: HP-35 DocumentationMessage #2 Posted by Bill Wiese on 10 Dec 2002, 11:52 p.m.,in response to message #1 by Fred Lusk (CA)

Fred... Your "100-99.99-0.01" example shows exactly why HP and other calculators use BCD floating-point math and not binary floating-point math. It is also why many financial software packages use BCD math, and why BCD math libraries are available for some compilers (and BCD support is built into COBOL, for example). Negative powers of 10 (.1, .01, .001...) are not exactly representable as a finite series of binary digits and are in fact the binary equivalent of 'repeating decimals' (like 1/3=0.33333333333...). Since Excel is dealing with numbers and not symbols (not knowing this is really 0.0), there are some slight errors in the least significant bits of the binary floating-point expressions of 99.99 and 0.01. (100 can be expressed exactly.) The odd 5.11x10^-15 answer just shows that these numbers weren't expressed exactly (because they CAN'T be). In some cases rounding strategies in the midst of calculation can be used to avoid this.

The areas of concern for accuracy on HP calcs wouldn't have been the algorithms for the +, -, * or / functions - these are known and predictable, and unless there is a bug they work to their specified limits (with some behavior determined by rounding and truncation strategies). Instead, HP was really interested in performance on log/trig functions and the behavior of the CORDIC-derived algorithms. (You might remember that on the HP-41C, new functions LN(1+x) and e^x-1 were offered because these allowed values near 1 to have log/antilog performed on them more accurately, since a different series approach was used.) Behavior for small and large argument values has to be tested. With rare exceptions (like the Pentium FDIV bug) hardware & related microcode runs fine; it's just that the math runs out at extreme ranges, so characterization needs to be done.
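[The effect described above can be reproduced in a few lines of Python. This is an editorial sketch, not part of the original post; Python's decimal module stands in for BCD-style decimal arithmetic.]

```python
from decimal import Decimal

# Binary floating point: 99.99 and 0.01 have no exact binary
# representation, so the "zero" result carries residual error.
binary_result = 100 - 99.99 - 0.01
print(binary_result)   # roughly 5.12e-15, not 0.0

# Decimal arithmetic represents 99.99 and 0.01 exactly, as BCD would.
decimal_result = Decimal("100") - Decimal("99.99") - Decimal("0.01")
print(decimal_result)  # 0.00
```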
Floating-point units on big CPUs sometimes use special polynomial approximations (Chebyshev, rational expressions, etc.) for speed (since CORDIC is fundamentally serial in nature). The function is approximated over a very small range, and transformations are performed on argument & output so that the algorithm is always running in a range for which it's optimized. Care is taken so that pre-processing the argument or post-processing the result for this algorithm does not add additional error. One goal for many designs, in fact, is that an N-bit-wide result should be achieved with an N-bit (or N+1-bit) wide calculation (instead of, say, N+5) - so error modelling of internal processes is very important. If really interested, you might wanna check "Software Manual for the Elementary Functions" by Cody & Waite and "Elementary Functions" by J.-M. Muller to get a bit of info on such concerns. Bill Wiese San Jose, CA
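[A toy version of the range-reduction-plus-polynomial scheme described above, added editorially. The function name is hypothetical, and a plain Taylor polynomial stands in for the tuned Chebyshev/minimax fits real FPUs use.]

```python
import math

LN2 = math.log(2.0)

def my_exp(x):
    # Pre-processing: write x = k*ln(2) + r with r in [-ln(2)/2, ln(2)/2],
    # so the polynomial only ever runs on the small range it's good for.
    k = round(x / LN2)
    r = x - k * LN2
    # Degree-6 Taylor polynomial for exp(r), adequate on that small range.
    p = 1 + r + r**2/2 + r**3/6 + r**4/24 + r**5/120 + r**6/720
    # Post-processing: undo the reduction with an exact scaling by 2**k.
    return p * 2.0 ** k
```

[The k*ln(2) split and the 2**k scaling are exactly the argument/output transformations the post mentions; a production implementation would use a minimax polynomial and also model the error of the reduction step itself.]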

 Re: HP-35 DocumentationMessage #3 Posted by Fred Lusk (CA) on 11 Dec 2002, 4:40 p.m.,in response to message #2 by Bill Wiese

Bill… Thanks for the detailed explanation. I already knew a few parts of what you wrote, but certainly not all. For instance, I knew about computers using binary math. But, since I'm not a computer science guy, what is the "BCD" in BCD floating-point math? Also, what is "CORDIC"? I remember that the "Error Analysis" document that I am trying to find included a discussion of the reasonableness of the result (including number of decimal places, as I mentioned in my original post) and I think also a discussion of how they implemented the rules. Do you know if HP implemented the rules by *using* BCD floating-point math, or did they implement them *after* using BCD math to generate a "preliminary" result?

Now I'll rant for a second, but not at you! I see no excuse for Intel, et al., not to have implemented the reasonableness rules, regardless of the method used to calculate the result. And given the failure of the chip makers to deal with this problem, I again see no excuse for the programmers of math-focused software to have ignored it. In a high-level computer language, it is trivial to count the number of decimal places in the inputs, then use the rules to govern the number of decimal places in the result (in fact, I can do this on an HP-41CX even though it is not necessary on an HP). I suspect it is also trivial at the machine level. Because I know about this problem, I know how to work around it by making use of the appropriate rounding functions in Excel and Mathcad. Unfortunately, I am certain that the vast majority of spreadsheet users do not know about this problem, and no doubt some of them are unaware that their tests for zero, etc. may be returning incorrect results. Fred

 Re: HP-35 DocumentationMessage #4 Posted by David Smith on 11 Dec 2002, 6:09 p.m.,in response to message #3 by Fred Lusk (CA)

BCD is binary-coded decimal. Basically each decimal digit (0-9) is encoded into a 4-bit chunk. The arithmetic involves unpacking each digit one at a time, doing the arithmetic on the digit, and repacking it into the result. It is very slow and inefficient in terms of memory usage. Its main advantage is that it works like "human" arithmetic and gives answers like a pencil-and-paper calculation would.

As far as Intel's use of floating-point arithmetic goes, this is a worldwide standard (IEEE 754). Computer science courses are taught on the subject. Any CS student learns about floating-point roundoffs and approximations (try 1 / (1/7) on any calculator). It is just part of life with computers. Some spreadsheet programs have a flag you can set or a numeric format for forcing BCD operations. Some do it when you specify a field as "currency", for example.
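[The 4-bit-per-digit packing David describes can be sketched as follows. This is an editorial illustration; the helper names are made up.]

```python
def to_bcd(n):
    """Pack a non-negative integer into BCD: one decimal digit per 4-bit nibble."""
    packed = 0
    for shift, digit in enumerate(reversed(str(n))):
        packed |= int(digit) << (4 * shift)
    return packed

def from_bcd(packed):
    """Unpack BCD back into an integer, one nibble at a time."""
    n, place = 0, 1
    while packed:
        n += (packed & 0xF) * place
        packed >>= 4
        place *= 10
    return n

# 1234 packs to 0x1234: each hex nibble is one decimal digit.
print(hex(to_bcd(1234)))  # 0x1234
```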

 Re: HP-35 DocumentationMessage #5 Posted by GE (France) on 12 Dec 2002, 4:18 a.m.,in response to message #3 by Fred Lusk (CA)

Regarding your rant, there is one small point you didn't fully notice. Say you give your number cruncher the value 1.4 and you want it to divide that by 3. We are in trouble. The problem is that the machine doesn't know if 1.4 is "one point four with one known decimal place" OR - for instance - "one point four zero zero zero zero with five known decimal places". To correct this, one would have to give every number as a couple (value, precision), and we know this is not what happens nowadays. So the machine conducts its calculations to its full precision, and the responsibility to sort things out falls back on the user... Note that I wouldn't like to use a calculator which would assume that zeroes are "unknown values". Not a perfect world...

 Re: HP-35 DocumentationMessage #6 Posted by Fred Lusk (CA) on 13 Dec 2002, 1:47 a.m.,in response to message #5 by GE (France)

GE… Thanks for your comments. Regarding your example, division is a tougher nut to crack, and I'm not too concerned with division anyway because some division results go on forever! However, for addition, subtraction, and multiplication it is a trivial matter to determine the correct number of decimal places for a result by looking at the inputs. HP correctly handled this in the HP-35 three decades ago (they documented this in the internal document I am trying to find). For me at least, it turns out that most of the conditional testing I do for zero is related to addition and subtraction. I know how to deal with these spurious digits; it just annoys me!!!

I know there is an issue with handling known trailing zeros as significant digits. As far as I know, none of the software or hardware I use actually preserves and uses input trailing zeros properly. Excel truncates them for inputs so they just disappear. Mathcad, on the other hand, keeps them for inputs, but they don't affect the precision of the result. For what I do (civil engineering), the loss of trailing zeros is less of an issue to me than a bad digit in the 15th decimal place turning zero into not zero. Thus, the way HP calculators handle these issues works very well for me. Why can't my $2000+ PC?

Actually, you bring up a good point. I wouldn't mind seeing a significant-digit mode on the next RPN calculator from HP (???) to go along with FIX/SCI/ENG.
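[Fred's decimal-place rule for addition and subtraction can be sketched in a few lines. This is an editorial illustration, not from any HP document; the helper names are invented, and Python's decimal module stands in for BCD arithmetic.]

```python
from decimal import Decimal

def decimal_places(s):
    """Count decimal places in a numeric string such as '99.99'."""
    return len(s.partition(".")[2])

def add_with_places(a, b):
    """Add two numeric strings, rounding the result to the smaller
    number of input decimal places (the usual hand-arithmetic rule)."""
    places = min(decimal_places(a), decimal_places(b))
    quantum = Decimal(1).scaleb(-places)   # e.g. 0.01 for two places
    return (Decimal(a) + Decimal(b)).quantize(quantum)

print(add_with_places("100.00", "-99.99"))  # 0.01, with no stray digits
```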

 Re: HP-35 DocumentationMessage #7 Posted by Vassilis Prevelakis on 12 Dec 2002, 8:00 a.m.,in response to message #1 by Fred Lusk (CA) The HP Journal Article is available in Jake Schwartz's CDROM set. **vp

 Re: HP-35 DocumentationMessage #8 Posted by Fred Lusk (CA) on 13 Dec 2002, 1:50 a.m.,in response to message #7 by Vassilis Prevelakis Vassilis… Thanks for the link. I also got a couple of e-mails to a link on the HP web site (they have a section devoted to history and the HP-35 is one of the machines highlighted there). Here's the link: http://www.hp.com/hpinfo/abouthp/histnfacts/museum/personalsystems/0023/other/0023hpjournal03.pdf I'm still trying to find the internal HP document I mentioned in my first post. Maybe it will turn up somewhere. Fred

 Re: HP-35 DocumentationMessage #9 Posted by Dave Shaffer on 13 Dec 2002, 12:41 p.m.,in response to message #8 by Fred Lusk (CA)

I learned on day two in Fortran class (almost 40 years ago!) that you never check for zero from a floating-point calculation. You shouldn't write

    IF ( X .EQ. 0.0 ) THEN (do something)

You should instead perform a test of the form

    IF ( ABS(X - 0.0) .LT. TOLERANCE ) THEN (close enough to zero to do something)

where TOLERANCE is the floating-point amount by which you approximate zero. For most situations, its value is not particularly critical. I think this emphasizes what some others have said: it is not the job of the CPU to figure out what quality of results the user wants or needs; it is the responsibility of the user/programmer to code his/her requirements properly, including tests for precision, accuracy, and significant figures. Hence, I cannot fault Intel, IBM, etc. for implementing straightforward arithmetic with binary numbers of whatever length the processor handles. As also pointed out, there are many (most!) simple decimal numbers which cannot be represented exactly in binary form.
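[Dave's Fortran advice, translated editorially into Python. The tolerance value is illustrative; math.isclose provides the same idea as a library routine for the general two-value case.]

```python
import math

x = 100 - 99.99 - 0.01   # not exactly zero in binary floating point
TOLERANCE = 1e-9

print(x == 0.0)                  # False: the exact test fails
print(abs(x - 0.0) < TOLERANCE)  # True: close enough to zero

# Relative comparison of two nonzero values:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```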

 The chip makers "corrected it" years agoMessage #11 Posted by Dave Hicks on 15 Dec 2002, 2:01 a.m.,in response to message #10 by Fred Lusk " This is a real problem and it would be very easy for the chip makers to correct it." Every Intel Architecture CPU from the 8086 to the Pentium 4 has/had instructions that support BCD math. Hardly anyone uses them. Overall though, if I needed super accurate calculations I'd trust 64 bit IEEE floating point over 10 digit BCD. The "test for reasonableness" I use is the minimum significant figures in my measurements which are always well below what any calculator or computer can handle.

 Re: The chip makers "corrected it" years agoMessage #13 Posted by Dave Hicks on 15 Dec 2002, 4:02 p.m.,in response to message #12 by Fred Lusk (CA)

"The spreadsheet dealt only with dollars and cents and used only addition, subtraction, multiplication, and division."

Finance is the area where you really want to use BCD. Bankers care about pennies on billions. For scientists that's usually noise beyond their ability to measure (let alone care). I'm surprised that common spreadsheets like Excel don't offer BCD math as an option. The last time I worked in BCD on a computer was using PL/I on an Amdahl mainframe in the 70s. I'm really not sure why BCD seems to have become so "de-emphasized" over the years. (Maybe it just seems that way since I don't write finance software.) Here's a package you might be interested in.

"Even with floating point math, it should be very easy to count the number of decimal places going in so the result can be reported properly coming out."

That's an interesting idea and you should probably take it to the IEEE. Since computers get data from files, networks, etc., you'll want a standard to represent the significant figures. The resulting accuracy will also have to be "readable" by the software so they don't display "4.0000" when they mean "4.00". I think you'll still have a user training problem, though, as most users don't expect 1/3 to result in 0 and are likely to moan and enter 1.000000000/3.000000000 just to force the computer to give the "right" answer as they see it. The most common complaint I see from users is that the answer is not "correct" to some huge number of digits, regardless of the number of digits they entered or the accuracy that their inputs really have.

"On another note: this is one of my two favorite hangouts on the Internet (the other being Astromart) and I appreciate what you have done for the HP community here. Happy Holidays."

Thank you! Happy holidays to you too.

 Re: The chip makers "corrected it" years agoMessage #14 Posted by Ernie Malaga on 15 Dec 2002, 6:43 p.m.,in response to message #13 by Dave Hicks >The last time I worked in BCD on a computer was using PL/I on an Amdahl mainframe in the 70s. I'm really not sure why BCD seems to have become so "de-emphasized" over the years. (Maybe it just seems that way since I don't write finance software.) From 1983 to 2000 I worked on IBM midrange systems -- namely, the System/36 and the AS/400. These two computers (as well as several others made by IBM) support BCD and, in fact, use it all the time. When I worked on the System/36, writing programs in RPG II, there was no such thing as floating point; every numeric variable was decimal and had a fixed number of decimal places. I believe there's no point arguing about which is better. Like a hammer and a screwdriver, they are tools with different applications, and one should not use one instead of the other. But it sure is nice to get 0.00 instead of 7.21E-18 or whatever. -Ernie

 What languages?Message #15 Posted by Dave Hicks on 17 Dec 2002, 1:29 a.m.,in response to message #14 by Ernie Malaga On PCs, the hardware has support but it seems that the common compilers don't support BCD natively. IBM's Visual Age PL/I supports it but at about $3K per copy, it's not so common.

 Re: What languages?Message #16 Posted by Ernie Malaga on 17 Dec 2002, 5:25 a.m.,in response to message #15 by Dave Hicks >On PCs, the hardware has support but it seems that the common compilers don't support BCD natively. IBM's Visual Age PL/I supports it but at about \$3K per copy, it's not so common. The AS/400 architecture was optimized for packed-decimal numeric storage and crunching. Languages such as RPG IV, COBOL, and CL use packed-decimal numeric variables by default. I'm not positive this is also true for PL/I, C, and FORTRAN (on the same machine). (Packed decimal format is the numeric format you get in PL/I when you declare a variable as FIXED DECIMAL(m, n).) -Ernie

 Re: What languages?Message #17 Posted by Britt Snodgrass on 17 Dec 2002, 12:59 p.m.,in response to message #16 by Ernie Malaga Ada 95 provides decimal fixed point types. See http://www.adaic.org/standards/95lrm/html/RM-3-5-9.html#I1703 It is also easy to implement a BCD abstract data type in Ada. GNAT 3.15p (GNU Ada 95) is an excellent GPL open-source compiler and is a available as a free download. Britt

 Thanks for the hints everyoneMessage #18 Posted by Dave Hicks on 18 Dec 2002, 1:41 a.m.,in response to message #17 by Britt Snodgrass I think it's time to do a little compiler collecting. You can never have too many compilers.

 Re: Thanks for the hints everyoneMessage #19 Posted by Britt Snodgrass on 18 Dec 2002, 1:51 p.m.,in response to message #18 by Dave Hicks GNAT is available at ftp://ftp.cs.nyu.edu/pub/gnat/3.15p and ftp://ftp.cs.nyu.edu/pub/gnat/3.15p/winnt/. I just noticed these aren't at the top of a Google search list. Google seems to be less helpful lately :(

 Re: Thanks for the hints everyoneMessage #20 Posted by Les Bell [Sydney] on 18 Dec 2002, 4:48 p.m.,in response to message #18 by Dave Hicks Don't forget Digital Research's PL/I Subset G compiler. There were versions for CP/M-80, CP/M-86 and MS-DOS, which came with terrific documentation - actually a really good textbook on structured programming. For general PL/I resources, start here: http://www.users.bigpond.com/robin_v/resource.htm For DRI PL/I-86 for DOS, see http://www.retroarchive.org/cpm/archive/unofficial/binary.html I wrote a lot of code during the early eighties with the DRI compilers, as well as the Access Manager (really Faircom's c-Tree product) and Display Manager add-ons. Lovely stuff. . . Finally, decimal arithmetic was also a feature of various "Business" BASICs, starting with Gordon Eubanks' CBASIC. Best, --- Les [http://www.lesbell.com.au]

 Re: What languages?Message #21 Posted by Massimo Gnerucci (Italy) on 17 Dec 2002, 7:44 a.m.,in response to message #15 by Dave Hicks I remember a BCD version of the good old Turbo Pascal for DOS, back in the mid-'80s. Ooops, here it is: http://community.borland.com/article/0,1410,20792,00.html Massimo

 Re: Misunderstood PrecisionMessage #22 Posted by Paul Brogger on 16 Dec 2002, 12:01 p.m.,in response to message #13 by Dave Hicks . . . which brings to mind another example of my own naiveté: After using a slide rule for a while, I bought my HP-21. Shortly thereafter, I packed it up & shipped it to Corvallis, complaining about a "four parts per billion error in the SIN function" . . . I got it back shortly, with a note explaining the algorithm involved and claiming that "the unit operates according to specifications." . . . and I wonder what other simplistic notions are being formed by generations fed a diet of digital displays, video game virtual worlds, and half-hour sitcom issue-bites with their cute, tidy dénouements . . .

 Re: Misunderstood PrecisionMessage #23 Posted by Dave Hicks on 17 Dec 2002, 1:23 a.m.,in response to message #22 by Paul Brogger I remember when the "Pentium Bug" was discovered - on the HP 48GX. It was a similar level of error.

 Pentium Bug jokes!Message #24 Posted by Karl Schneider on 18 Dec 2002, 1:12 a.m.,in response to message #23 by Dave Hicks Wasn't the Pentium bug discovered around 1993? A Yahoo search revealed a site with compilations of the resulting jokes. Remarkably clever and witty!

 Here's another article I've always likedMessage #25 Posted by Dave Hicks on 18 Dec 2002, 1:51 a.m.,in response to message #24 by Karl Schneider Not about the flaw but...

 Re: Computer Humor Hall of FameMessage #26 Posted by Paul Brogger on 18 Dec 2002, 10:03 a.m.,in response to message #23 by Dave Hicks If I may nominate an article, one of my all-time favorites is this.

 Re: Computer Humor Hall of FameMessage #27 Posted by db(martinez,california) on 19 Dec 2002, 12:47 a.m.,in response to message #26 by Paul Brogger paul; thanks. i needed that.

 Re: The chip makers "corrected it" years agoMessage #28 Posted by Fred Lusk (CA) on 19 Dec 2002, 2:12 a.m.,in response to message #13 by Dave Hicks Dave… Thanks for your reply. The TurboPower package looks interesting, but I'm not sure I actually have a use for it. I like your suggestion about taking this up with the IEEE…but how, exactly, does one do this? Regarding your 1/3 example, I don't see a need to type in all those trailing zeros. From an arithmetic (rather than scientific/engineering) standpoint, 1 and 1.000000… are identical, so the trailing zeros add nothing (pun intended). On the other hand, it would be nice to be able to specify the desired precision by adding the appropriate number of decimal places. Fred

 Re: Why the extra zeroes . . .Message #29 Posted by Paul Brogger on 19 Dec 2002, 11:55 a.m.,in response to message #28 by Fred Lusk (CA) If I may chime in, I think Dave was suggesting that, with an "intelligent" compiler, adding the zeroes would force greater precision in the intermediate calculation, and in the internal representation of the result. Some systems decide, for example, to use a floating-point division algorithm rather than a (quicker) integer divide, if a "." is coded after a literal dividend or divisor. For example, "1./3" (or "1./X") might force a floating-point algorithm and result, whereas something like "1/3" might be compiled as an integer divide. So, in such a context, the decimal point (and, perhaps, the extra zeroes) DO add "something" -- increased precision. (That was a nice pun, though!)

 I thought this was what Fred was proposingMessage #30 Posted by Dave Hicks on 19 Dec 2002, 1:06 p.m.,in response to message #29 by Paul Brogger ie: "Even with floating point math, it should be very easy to count the number of decimal places going in so the result can be reported properly coming out. " Now if "1 and 1.000000… are identical" then I don't see what is being proposed, unless 1.00000/3.00000 should still be zero? Would a user have to "add something" like 1.00001/3 in order to get an answer closer to .33...? I'm probably misunderstanding the idea.

 Re: I thought this was what Fred was proposingMessage #31 Posted by Michael F. Coyle on 19 Dec 2002, 11:28 p.m.,in response to message #30 by Dave Hicks

No, you wouldn't do 1.000001/3; the idea (if *I'm* understanding it correctly) is that:

    x = 1.00/3.00 would give 0.33
    y = 1.0000/3.0000 would give 0.3333

etc. Note that 0.33 and 0.3333 probably don't have exact binary representations, so I'm not sure how well this would work in practice. Also note that the test "if x=y" would fail, even though both variables "ought to be one-third". And what result should we give for 1.00000000/3.0 ? (I had written a diatribe about how PL/I tries to do it all wrt arithmetic declarations and, IMHO, sometimes causes more problems than it solves. But I've mercifully omitted it here.)

 Re: I thought this was what Fred was proposingMessage #32 Posted by Dave Shaffer on 20 Dec 2002, 10:57 a.m.,in response to message #31 by Michael F. Coyle re "And what result should we give for 1.00000000/3.0 ?" That one's easy - if you follow the rules for significant figures: it's 0.33 (2 significant figures). I berate my physics students on this subject all semester long - some of them get it, some of them don't. As to the difference between expressions like 1.00/3.00 and 1.00000/3.00000 in an IF test - that's why numerical IF tests should be tests within some tolerance rather than "IS X = Y?" (Didn't my comment like that occur w-a-a-a-a-y back in this thread!!??!!)
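[The significant-figure rule Dave applies can be sketched as below. This is an editorial illustration with invented helper names; sig_figs is a naive counter that handles forms like "3.0" and "1.00000000" but not trailing zeros without a decimal point (e.g. "100").]

```python
import math

def sig_figs(s):
    """Naively count significant figures in a numeric string."""
    digits = s.replace("-", "").replace(".", "").lstrip("0")
    return len(digits)

def divide_sig(a, b):
    """Divide numeric strings, keeping only as many significant
    figures as the less precise input supplies."""
    figs = min(sig_figs(a), sig_figs(b))
    q = float(a) / float(b)
    if q == 0.0:
        return 0.0
    # Round to `figs` significant figures.
    return round(q, figs - 1 - math.floor(math.log10(abs(q))))

print(divide_sig("1.00000000", "3.0"))  # 0.33 -- two significant figures
```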

 Re: Precision in "IF" testsMessage #33 Posted by Paul Brogger on 20 Dec 2002, 2:04 p.m.,in response to message #32 by Dave Shaffer Right. I suppose one could implement this "precision-aware compare" as a subroutine or macro that first renders the two numerical arguments to the specified precision, and then compares those temporary results . . . And, while we're at it, shouldn't a string compare take into account the languages of the values compared? (E.g., "IF 'rojo'(spn) = 'red'(eng) THEN . . ." 8-)
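[Paul's "precision-aware compare" might look like this in Python (an editorial sketch; equal_to_places is a hypothetical name, not an existing API): round both arguments to an agreed number of decimal places, then compare the temporaries.]

```python
def equal_to_places(x, y, places):
    """Compare two floats after rendering both to `places` decimal places."""
    return round(x, places) == round(y, places)

# Both values "ought to be one-third" and agree to 2 places:
print(equal_to_places(0.333, 0.3333, 2))             # True
# A residue like 100 - 99.99 - 0.01 compares equal to zero:
print(equal_to_places(100 - 99.99 - 0.01, 0.0, 10))  # True
```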

 Significant FiguresMessage #34 Posted by Michael F. Coyle on 20 Dec 2002, 2:32 p.m.,in response to message #32 by Dave Shaffer Mea culpa. You're right of course about 1.00000000/3.0 = 0.33.

 Re: HP-35 DocumentationMessage #35 Posted by Jim Cathey on 19 Dec 2002, 8:43 p.m.,in response to message #10 by Fred Lusk

The problem is that computers generally work in binary floating point, not decimal fixed point. They do this for speed: decimal math is slower, and all the hardware optimizations have been done on binary math only. The inaccuracies you complain of generally arise in the decimal->binary->decimal conversion process for non-integers, combined with the nature of floating-point math. It is criminal that spreadsheets don't offer a BCD math provision, but there it is. It's not the computer's fault, it is the programmer's.

While I was long aware of this, a number of years back I worked at a company that supplied computers to banks. Our prior product line was Z-80 assembler based, and used a BCD library to do all its financial math. No problem. The new line used C on a 68K, and as C had floating point, they attempted to use it for all math. What a disaster! I gave a number of lessons to some programmers at that time. My analogy was that using floating-point math for financial calculations was like finding out how much money is in your wallet using calipers. Floating point (and analog measurement techniques) have an inherent plus/minus characteristic that's pretty unwelcome in financial calculations.

I think the final solution for the innards of that software was to use floating-point math on pennies or mils (as integers), with a couple of BCD routines to handle the troublesome stuff. This was a compromise that kept most of the speed of FP without the mistakes. BTW, these problems were detected during internal development and were never shipped to customers.
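[The integer-pennies compromise Jim describes can be sketched as follows (an editorial illustration; the helper names are invented): do the arithmetic on whole cents, where binary arithmetic is exact, and only format as dollars at the edges.]

```python
def add_prices_cents(prices_cents):
    """Sum prices held as integer cents: exact, no rounding drift."""
    return sum(prices_cents)

def format_dollars(cents):
    """Format a non-negative cent count as a dollar string."""
    return f"${cents // 100}.{cents % 100:02d}"

# Ten items at $0.10 each: exact in integer cents...
total = add_prices_cents([10] * 10)
print(format_dollars(total))    # $1.00

# ...whereas the same sum in binary floating point drifts:
print(sum([0.10] * 10) == 1.0)  # False
```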
