The Museum of HP Calculators

HP Forum Archive 09


HP-35 Documentation
Message #1 Posted by Fred Lusk (CA) on 10 Dec 2002, 10:11 p.m.

Greetings…this is long, so please bear with me…

Does anyone out there in MoHPC-land happen to have a copy of an old internal HP document titled (I think) "Error Analysis for the HP-35"??? If so, I would like to get a copy of it - scanned or on paper. And I'm sure the Museum would like a copy too. As I remember, the document covers the accuracy of each mathematical function down to the last decimal place, and shows the results with numerous graphs. This document is probably the basis for the old HP Journal article "Algorithms and Accuracy in the HP-35." [Dave Hicks - BTW, I tried the link to that article and it appears that HP no longer has it available on their web site.]

Here's the background story to my request: about January 1973 (when I was in junior high school) my father's long-awaited HP-35 finally arrived. He was teaching high school chemistry at the time. Because of his math and engineering background, he was interested in how the little beast did its internal calculations. He contacted the local HP rep (they used to have an office here) for information and was rewarded with the document I am asking about. He recently gave me the calculator (no batteries, but still works on AC) and I asked him if he still had the "Error Analysis." He has searched and can't find it. I would like it both for my collection and for a short article I plan to write for my company's internal quality newsletter.

The part that most interests me for my article is the part of the "Error Analysis" that deals with HP's implementation of the "Tests for Reasonableness," which are the simple rules for determining how many decimal places belong in an answer. Thus, for addition and subtraction, the result cannot have more decimal places than the input number with the most decimal places, and for multiplication, the result cannot have more decimal places than the sum of the numbers of decimal places of the input numbers.
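
For illustration, those two rules are easy to put into code. Here is a minimal Python sketch (an illustration of the rules only, not HP's implementation), which counts the decimal places of the inputs and rounds the result to the place count the rules allow:

def decimal_places(s):
    # Digits after the decimal point in a numeric literal, e.g. "99.99" -> 2.
    return len(s.split(".")[1]) if "." in s else 0

def reasonable_add(a, b):
    # Addition/subtraction: no more places than the input with the most places.
    places = max(decimal_places(a), decimal_places(b))
    return round(float(a) + float(b), places)

def reasonable_mul(a, b):
    # Multiplication: no more places than the sum of the inputs' place counts.
    places = decimal_places(a) + decimal_places(b)
    return round(float(a) * float(b), places)

print(reasonable_add("100", "-99.99"))  # 0.01
print(reasonable_add("0.01", "-0.01"))  # 0.0
print(reasonable_mul("1.5", "0.25"))    # 0.375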

I'll give y'all a hint of what my article will be about because it may save you some grief in using a personal computer for calculations. I won't bore you yet with how I found this (I've bored you enough as it is), but try this in Excel: enter the equation =100-99.99-0.01 into a cell and hit Enter. Instead of ZERO, you should see 5.1156995306556E-15 (if you don't see it, format the cell to General and widen the column). As you can imagine, this can ruin a perfectly good test for ZERO, etc. I have a whole list of similar equations: most produce ZERO, but many don't…shades of the Pentium bug. I have tried this on everything from a 286 on up and in many different software packages, of which only Q&A (the old DOS database) and Word for Windows 1 got the right answer. Q&A had an advantage because it was limited to 7 or 8 decimal places. I even called Mathcad tech support several years ago and asked why they didn't implement the "Tests for Reasonableness" since Intel didn't see fit to do it. I mentioned that HP has been doing it for years. Their recommendation was to use the various rounding functions to fix the problem. I knew that already.
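
The same result falls out of any language built on IEEE 754 binary doubles. A quick Python check (a sketch of the effect, not Excel's internals):

x = 100 - 99.99 - 0.01
print(x)       # roughly 5.1156995306556e-15, not 0.0
print(x == 0)  # False -- there goes the test for ZERO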

I've done enough damage, so I'll quit now. Thanks for hanging on.

      
Re: HP-35 Documentation
Message #2 Posted by Bill Wiese on 10 Dec 2002, 11:52 p.m.,
in response to message #1 by Fred Lusk (CA)

Fred...

Your "100-99.99-0.01" example shows exactly why HP and other calculators use BCD floating-point math and not binary floating-point math. It is also why many financial software packages use BCD math, and why BCD math libraries are available for some compilers (and BCD support is built into COBOL, for example).

Negative powers of 10 (.1, .01, .001...) are not exactly representable as a finite string of binary digits; they are in fact the binary equivalent of 'repeating decimals' (like 1/3 = 0.33333333333...). Since Excel is dealing with numbers and not symbols (it has no way of knowing the expression is really 0.0), there are slight errors in the least significant bits of the binary floating-point representations of 99.99 and 0.01. (100 can be expressed exactly.) The odd 5.11x10^-15 answer is just showing that these numbers weren't expressed exactly (because they CAN'T be). In some cases, rounding strategies in the midst of the calculation can be used to avoid this.
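
You can even print the exact values the doubles actually hold. Python's decimal module will do it (a sketch; any tool that can dump the bits shows the same thing):

from decimal import Decimal

# Constructing a Decimal from a float exposes the float's exact binary value:
print(Decimal(0.01))   # 0.01000000000000000020816681711721685...
print(Decimal(99.99))  # 99.9899999999999948840923025272786...
print(Decimal(100))    # 100 -- an integer, exactly representable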

The areas of concern for accuracy on HP calcs wouldn't have been the algorithms for the +, -, *, or / functions - these are known and predictable, and unless there is a bug they work to their specified limits (with some behavior determined by rounding and truncation strategies). The real interest is the performance of the log/trig functions and the behavior of the CORDIC-derived algorithms. (You might remember that the HP-41C offered the new functions LN(1+x) and e^x-1 because they allow arguments near zero to have log/antilog performed on them more accurately, since a different series approach is used.) Behavior for small and large argument values has to be tested.
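
The same pair of special functions survives in modern math libraries, and the payoff for small arguments is easy to demonstrate. A Python sketch (assuming IEEE 754 doubles):

import math

x = 1e-10
# Forming 1+x first rounds away most of x's digits before the log ever runs,
# leaving only about 6-7 correct significant digits:
print(math.log(1 + x))
# log1p computes ln(1+x) directly and keeps nearly full precision:
print(math.log1p(x))   # ~9.9999999995e-11
# math.expm1(x) is the matching e^x - 1 on the antilog side.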

With rare exceptions (like the Pentium FDIV bug) hardware & related microcode runs fine; it's just that the math runs out at extreme ranges, so characterization needs to be done.

Floating-point units on big CPUs sometimes use special polynomial approximations (Chebyshev, rational expressions, etc.) for speed (since CORDIC is fundamentally serial in nature). The function is approximated over a very small range, and transformations are performed on the argument and output so that the algorithm is always running in a range for which it's optimized. Care is taken so that pre-processing the argument or post-processing the result does not add additional error. One goal for many designs, in fact, is that an N-bit-wide result should be achieved with an N-bit (or N+1-bit) wide calculation (instead of, say, N+5), so error modelling of the internal processes is very important.

If really interested, you might wanna check "Software Manual for the Elementary Functions" by Cody & Waite and "Elementary Functions" by J.-M. Muller to get a bit of info on such concerns.

Bill Wiese San Jose, CA

            
Re: HP-35 Documentation
Message #3 Posted by Fred Lusk (CA) on 11 Dec 2002, 4:40 p.m.,
in response to message #2 by Bill Wiese

Bill…

Thanks for the detailed explanation. I already knew a few parts of what you wrote, but certainly not all. For instance, I knew about computers using binary math. But, since I'm not a computer science guy, what is the "BCD" in BCD floating-point math? Also, what is "CORDIC"?

I remember that the "Error Analysis" document that I am trying to find included a discussion of the reasonableness of the result (including the number of decimal places, as I mentioned in my original post) and, I think, also a discussion of how they implemented the rules. Do you know if HP implemented the rules by *using* BCD floating-point math, or did they implement them *after* using BCD math to generate a "preliminary" result?

Now I'll rant for a second, but not at you! I see no excuse for Intel, et al., not to have implemented the reasonableness rules, regardless of the method used to calculate the result. And given the failure of the chip makers to deal with this problem, again I see no excuse for the programmers of math-focused software to have ignored this. In a high-level computer language, it is trivial to count the number of decimal places in the inputs, then use the rules to govern the number of decimal places in the result (in fact, I can do this on an HP-41CX even though it is not necessary on an HP). I suspect it is also trivial at the machine level. Because I know about this problem, I know how to work around it by making use of the appropriate rounding functions in Excel and Mathcad. Unfortunately, I am certain that the vast majority of spreadsheet users do not know about this problem, and no doubt some of them are unaware that their tests for zero, etc., may be returning incorrect results.

Fred

                  
Re: HP-35 Documentation
Message #4 Posted by David Smith on 11 Dec 2002, 6:09 p.m.,
in response to message #3 by Fred Lusk (CA)

BCD is binary-coded decimal. Basically, each decimal digit (0-9) is encoded in a 4-bit chunk. The arithmetic involves unpacking each digit one at a time, doing the arithmetic on the digit, and repacking it into the result. It is very slow and inefficient in terms of memory usage. Its main advantage is that it works like "human" arithmetic and gives answers like a pencil-and-paper calculation would.
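
A sketch of the packing itself (Python used purely for illustration):

def to_bcd(n):
    # Pack a non-negative integer into BCD: one decimal digit per 4-bit nibble.
    packed = 0
    for shift, digit in enumerate(reversed(str(n))):
        packed |= int(digit) << (4 * shift)
    return packed

def from_bcd(packed):
    # Unpack the nibbles back into a decimal integer.
    digits = []
    while packed:
        digits.append(str(packed & 0xF))
        packed >>= 4
    return int("".join(reversed(digits)) or "0")

print(hex(to_bcd(1234)))  # 0x1234 -- the hex nibbles are the decimal digits
print(from_bcd(0x1234))   # 1234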

As far as Intel's use of floating-point arithmetic goes, this is a standard worldwide. Computer science courses are taught on the subject. Any CS student learns about floating-point roundoffs and approximations (try 1 / (1/7) on any calculator). It is just part of life with computers. Some spreadsheet programs have a flag you can set, or a numeric format, for forcing BCD operations. Some do it when you specify a field as "currency", for example.

                  
Re: HP-35 Documentation
Message #5 Posted by GE (France) on 12 Dec 2002, 4:18 a.m.,
in response to message #3 by Fred Lusk (CA)

Regarding your rant, there is one small point you didn't fully notice.

Say you give your number cruncher the value 1.4 and you want it to divide that by 3. We are in trouble.

The problem is that the machine doesn't know if 1.4 is "one point four with one known decimal place" OR -for instance- "one point four zero zero zero zero with five known decimal places". To correct this, one should give every number as a couple (value, precision) and we know this is not what happens nowadays.

So the machine conducts its calculations to its full precision, and the responsibility to sort things out falls back on the user...
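
To make the couple idea concrete, here is a toy Python sketch where division rounds its result back to the known precision (an illustration only, not how any real machine works):

from dataclasses import dataclass

@dataclass
class Measured:
    value: float
    places: int  # known decimal places

    def __truediv__(self, k):
        # Divide, then round back to the declared precision.
        return Measured(round(self.value / k, self.places), self.places)

print(Measured(1.4, 1) / 3)  # Measured(value=0.5, places=1)
print(Measured(1.4, 5) / 3)  # Measured(value=0.46667, places=5)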

Note that I wouldn't like to use a calculator which would assume that zeroes are "unknown values".

Not a perfect world...

                        
Re: HP-35 Documentation
Message #6 Posted by Fred Lusk (CA) on 13 Dec 2002, 1:47 a.m.,
in response to message #5 by GE (France)

GE…

Thanks for your comments. Regarding your example, division is a tougher nut to crack, and I'm not too concerned with division anyway because some division results go on forever! However, for addition, subtraction, and multiplication it is a trivial matter to determine the correct number of decimal places for a result by looking at the inputs. HP correctly handled this in the HP-35 three decades ago (they documented this in the internal document I am trying to find). For me at least, it turns out that most of the conditional testing I do for zero is related to addition and subtraction. I know how to deal with these spurious digits; it just annoys me!!!

I know there is an issue with handling known trailing zeros as significant digits. As far as I know, none of the software or hardware I use actually preserves and uses input trailing zeros properly. Excel truncates them for inputs so they just disappear. Mathcad, on the other hand, keeps them for inputs, but they don't affect the precision of the result. For what I do (civil engineering), the loss of trailing zeros is less of an issue to me than a bad digit in the 15th decimal place turning zero into not-zero. Thus, the way HP calculators handle these issues works very well for me. Why can't my $2000+ PC? Actually, you bring up a good point. I wouldn't mind seeing a significant-digits mode on the next RPN calculator from HP (???) to go along with FIX/SCI/ENG.

      
Re: HP-35 Documentation
Message #7 Posted by Vassilis Prevelakis on 12 Dec 2002, 8:00 a.m.,
in response to message #1 by Fred Lusk (CA)

The HP Journal Article is available in Jake Schwartz's CDROM set.

http://www.magpage.com/~jakes

**vp

            
Re: HP-35 Documentation
Message #8 Posted by Fred Lusk (CA) on 13 Dec 2002, 1:50 a.m.,
in response to message #7 by Vassilis Prevelakis

Vassilis…

Thanks for the link. I also got a couple of e-mails to a link on the HP web site (they have a section devoted to history and the HP-35 is one of the machines highlighted there). Here's the link:

http://www.hp.com/hpinfo/abouthp/histnfacts/museum/personalsystems/0023/other/0023hpjournal03.pdf

I'm still trying to find the internal HP document I mentioned in my first post. Maybe it will turn up somewhere.

Fred

                  
Re: HP-35 Documentation
Message #9 Posted by Dave Shaffer on 13 Dec 2002, 12:41 p.m.,
in response to message #8 by Fred Lusk (CA)

I learned on day two in Fortran class (almost 40 years ago!) that you never checked for zero from a floating point calculation. You shouldn't write

IF( X .EQ. 0.0 ) THEN (do something)

You should instead perform a test of the form

IF( ABS(X-0.0) .LT. TOLERANCE) THEN (close enough to zero to do something)

where TOLERANCE is the floating point amount by which you approximate zero. For most situations, its value was not particularly critical.

I think this emphasizes what some others have said: it is not the job of the CPU to figure out what quality of results the user wants or needs; it is the responsibility of the user/programmer to code his/her requirements properly, including tests for precision, accuracy, and significant figures. Hence, I cannot fault Intel, IBM, etc. for implementing straightforward arithmetic with binary numbers of whatever length the processor handles. As also pointed out, there are many (most!) simple decimal numbers which cannot be represented exactly in binary form.

                        
Re: HP-35 Documentation
Message #10 Posted by Fred Lusk on 14 Dec 2002, 10:28 p.m.,
in response to message #9 by Dave Shaffer

Thanks for your comments, Dave. I must, however, respectfully disagree. When you (and I) took FORTRAN, we would have been classed as knowledgeable or even sophisticated computer users. I remember a similar "warning" to the one you learned. As a programmer, you are working with the computer at a more fundamental level and you are supposed to be aware of these types of things, as you correctly pointed out.

On the other hand, the average computer user today is unaware that spurious digits even exist that could ruin a calculation. The expectation is that results will be correct…witness the typical reaction several years ago when the "Pentium Bug" hit the big time. For most of us, the calculation error was of no importance, but Intel replaced a lot of chips (including mine). Of course, that was an algorithm accuracy issue and mine is not. My issue is actually more fundamental.

I must also take issue with your statement that "it is not the job of the CPU to figure out what quality of results the user wants or needs." I'm not asking the CPU to guess anything. I'm asking the CPU to give me the correct answer, and it's not. The job of the CPU is to return the correct answer within the range it is capable of…that's the quality of result I expect. And regardless of the level of precision the CPU is capable of, for addition, subtraction, and multiplication (and some division), the spurious digits create an inaccurate result, even if it is in the 15th place. 100.00-99.99-0.01 is exactly 0 (arithmetically), not 5x10^-15, and the CPU can be designed to return this correct answer. Plus, the inability of the CPU to do this has real consequences. Besides, if quality of result is not important, why do we waste our time calculating so many decimal places? We could just as well go back to our slide rules. But of course we can't. We need computer precision for much of what we do today.

This is a real problem and it would be very easy for the chip makers to correct it. I don't buy the argument that just because a calculation is done using floating-point techniques, implementing the simple rules I mentioned previously couldn't or shouldn't be done. HP took care of it because they knew the user expected the correct number of decimal places in the answer. Imagine the uproar if the HP-35 had returned the same type of answer my PC does. Confidence in the machine would have evaporated immediately. Computer users should expect no less.

Fred

                              
The chip makers "corrected it" years ago
Message #11 Posted by Dave Hicks on 15 Dec 2002, 2:01 a.m.,
in response to message #10 by Fred Lusk

" This is a real problem and it would be very easy for the chip makers to correct it."

Every Intel Architecture CPU from the 8086 to the Pentium 4 has/had instructions that support BCD math. Hardly anyone uses them.

Overall though, if I needed super-accurate calculations I'd trust 64-bit IEEE floating point over 10-digit BCD.

The "test for reasonableness" I use is the minimum significant figures in my measurements which are always well below what any calculator or computer can handle.

                                    
Re: The chip makers "corrected it" years ago
Message #12 Posted by Fred Lusk (CA) on 15 Dec 2002, 3:00 p.m.,
in response to message #11 by Dave Hicks

Dave…

Thanks for your reply. My argument is not BCD vs floating point math. Frankly, as a user, I am more interested in an accurate and precise result than how the machine actually comes up with it. A CPU housing a fast-fingered chimp with an HP-42S would be enough for me!

My argument is that regardless of the method used to make a calculation, the CPU has not finished its job if it returns 5x10^-15 for zero. I am looking at this from the standpoint of the user, not computer science. The user has the right to expect the correct result, and with the vast number of unsophisticated computer users out there, it is even more important. Even with floating-point math, it should be very easy to count the number of decimal places going in so the result can be reported properly coming out. Floating point or not, we can't deny that a subtraction problem done with numbers having only two decimal places must return a result with only two decimal places. That's the reality of arithmetic. Even though numbers in the scientific/engineering world often have a tolerance, most of us properly use numbers as exact in the arithmetical sense, not with a +/- after them. I would rather the computer return the arithmetically correct result and let me worry about the tolerances.

I first ran across this problem in a practical way about twelve years ago. Back in the days when this was all my 386 could handle, I built a 1.8 MB spreadsheet to calculate an assessment spread (this is where we take the cost of a construction project and prorate it to the benefiting lots, 650 in this case, in proportion to the benefit). The spreadsheet dealt only with dollars and cents and used only addition, subtraction, multiplication, and division. Although some of the division problems did result in fractional pennies, all critical amounts were rounded to the nearest penny. Part of this spreadsheet took data on cash payments from the property owners collected by the County and compared it to my own calculation. I set up an error-check column to ensure that the County hadn't messed up the Q&A database I made for them. I hadn't included any round-offs here because all three numbers that went into each error-check calculation had been previously rounded to two decimal places or had been directly input to two decimal places. The result of all 650 calculations was supposed to be zero. Imagine my surprise when one of them wasn't. I had long since forgotten the finer points of floating-point calculations. As a civil engineer, it is not something I deal with every day. In any event, it took a few minutes for me to find the one offending calculation and several hours to determine that nothing bad had actually happened to either my spreadsheet or the data from the County.

Over the years I have tested many equations similar to the one I posted. Most produce the correct results, but many do produce the spurious digits. I have also tested this on many of the programs I have used. The only two that ever got the correct answer were Q&A 4.0 (which was programmed to use only 7 or 8 decimal places) and Word for Windows 1.0.

In spite of the protests of those who know a lot more about computers than I do (and I know a fair amount for not being in the field), I still maintain that this spurious-digits problem is a real problem, an easily fixable problem, and the responsibility of the chip makers and/or the programmers. My first vote, though, is that this should be corrected at the CPU level. It should never be an issue for the user.

On another note: this is one of my two favorite hangouts on the Internet (the other being Astromart) and I appreciate what you have done for the HP community here. Happy Holidays.

Fred

                                          
Re: The chip makers "corrected it" years ago
Message #13 Posted by Dave Hicks on 15 Dec 2002, 4:02 p.m.,
in response to message #12 by Fred Lusk (CA)

"The spreadsheet dealt only with dollars and cents and used only addition, subtraction, multiplication, and division."

Finance is the area where you really want to use BCD. Bankers care about pennies on billions. For scientists, that's usually noise beyond their ability to measure (let alone care about). I'm surprised that common spreadsheets like Excel don't offer BCD math as an option. The last time I worked in BCD on a computer was using PL/I on an Amdahl mainframe in the '70s. I'm really not sure why BCD seems to have become so "de-emphasized" over the years. (Maybe it just seems that way since I don't write finance software.)
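
Decimal-arithmetic libraries do exist for most environments. In Python, for instance, the decimal module gives the behavior a ledger expects; a quick sketch:

from decimal import Decimal

# The literals are carried as exact decimal values, so the
# penny-level test for zero comes out exactly right:
total = Decimal("100") - Decimal("99.99") - Decimal("0.01")
print(total)       # 0.00
print(total == 0)  # True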

Here's a package you might be interested in.

"Even with floating point math, it should be very easy to count the number of decimal places going in so the result can be reported properly coming out."

That's an interesting idea, and you should probably take it to the IEEE. Since computers get data from files, networks, etc., you'll want a standard to represent the significant figures. The resulting accuracy will also have to be "readable" by the software so they don't display "4.0000" when they mean "4.00".

I think you'll still have a user training problem though as most users don't expect 1/3 to result in 0 and are likely to moan and enter 1.000000000/3.000000000 just to force the computer to give the "right" answer as they see it. The most common complaint I see from users is that the answer is not "correct" to some huge number of digits regardless of the number of digits they entered or the accuracy that their inputs really have.

" On another note: this is one of my two favorite hangouts on the Internet (the other being Astromart) and I appreciate what you have done for the HP community here. Happy Holidays."

Thank you! Happy holidays to you too.

                                                
Re: The chip makers "corrected it" years ago
Message #14 Posted by Ernie Malaga on 15 Dec 2002, 6:43 p.m.,
in response to message #13 by Dave Hicks

>The last time I worked in BCD on a computer was using PL/I on an Amdahl mainframe in the 70s. I'm really not sure why BCD seems to have become so "de-emphasized" over the years. (Maybe it just seems that way since I don't write finance software.)

From 1983 to 2000 I worked on IBM midrange systems -- namely, the System/36 and the AS/400. These two computers (as well as several others made by IBM) support BCD and, in fact, use it all the time. When I worked on the System/36, writing programs in RPG II, there was no such thing as floating point; every numeric variable was decimal and had a fixed number of decimal places.

I believe there's no point arguing about which is better. Like a hammer and a screwdriver, they are tools with different applications, and one shouldn't be used where the other belongs.

But it sure is nice to get 0.00 instead of 7.21E-18 or whatever.

-Ernie

                                                      
What languages?
Message #15 Posted by Dave Hicks on 17 Dec 2002, 1:29 a.m.,
in response to message #14 by Ernie Malaga

On PCs, the hardware has support but it seems that the common compilers don't support BCD natively. IBM's Visual Age PL/I supports it but at about $3K per copy, it's not so common.

                                                            
Re: What languages?
Message #16 Posted by Ernie Malaga on 17 Dec 2002, 5:25 a.m.,
in response to message #15 by Dave Hicks

>On PCs, the hardware has support but it seems that the common compilers don't support BCD natively. IBM's Visual Age PL/I supports it but at about $3K per copy, it's not so common.

The AS/400 architecture was optimized for packed-decimal numeric storage and crunching. Languages such as RPG IV, COBOL, and CL use packed-decimal numeric variables by default. I'm not positive this is also true for PL/I, C, and FORTRAN (on the same machine).

(Packed decimal format is the numeric format you get in PL/I when you declare a variable as FIXED DECIMAL(m, n).)
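
A rough analogue of FIXED DECIMAL(m, 2) can be imitated in Python's decimal module by quantizing every value to a fixed scale (a sketch of the idea, not real packed decimal):

from decimal import Decimal

CENT = Decimal("0.01")

def fixed2(x):
    # Force a value onto a two-decimal-place grid, like FIXED DECIMAL(m, 2).
    return Decimal(x).quantize(CENT)

print(fixed2("100") - fixed2("99.99") - fixed2("0.01"))  # 0.00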

-Ernie

                                                                  
Re: What languages?
Message #17 Posted by Britt Snodgrass on 17 Dec 2002, 12:59 p.m.,
in response to message #16 by Ernie Malaga

Ada 95 provides decimal fixed point types. See http://www.adaic.org/standards/95lrm/html/RM-3-5-9.html#I1703 It is also easy to implement a BCD abstract data type in Ada.

GNAT 3.15p (GNU Ada 95) is an excellent GPL open-source compiler and is available as a free download.

Britt

                                                                        
Thanks for the hints everyone
Message #18 Posted by Dave Hicks on 18 Dec 2002, 1:41 a.m.,
in response to message #17 by Britt Snodgrass

I think it's time to do a little compiler collecting. You can never have too many compilers.

                                                                              
Re: Thanks for the hints everyone
Message #19 Posted by Britt Snodgrass on 18 Dec 2002, 1:51 p.m.,
in response to message #18 by Dave Hicks

GNAT is available at ftp://ftp.cs.nyu.edu/pub/gnat/3.15p and ftp://ftp.cs.nyu.edu/pub/gnat/3.15p/winnt/.

I just noticed these aren't at the top of a Google search list. Google seems to be less helpful lately :(

                                                                              
Re: Thanks for the hints everyone
Message #20 Posted by Les Bell [Sydney] on 18 Dec 2002, 4:48 p.m.,
in response to message #18 by Dave Hicks

Don't forget Digital Research's PL/I Subset G compiler. There were versions for CP/M-80, CP/M-86 and MS-DOS, which came with terrific documentation - actually a really good textbook on structured programming.

For general PL/I resources, start here: http://www.users.bigpond.com/robin_v/resource.htm

For DRI PL/I-86 for DOS, see http://www.retroarchive.org/cpm/archive/unofficial/binary.html

I wrote a lot of code during the early eighties with the DRI compilers, as well as the Access Manager (really Faircom's c-Tree product) and Display Manager add-ons. Lovely stuff. . .

Finally, decimal arithmetic was also a feature of various "Business" BASICs, starting with Gordon Eubanks' CBASIC.

Best,

--- Les [http://www.lesbell.com.au]

                                                            
Re: What languages?
Message #21 Posted by Massimo Gnerucci (Italy) on 17 Dec 2002, 7:44 a.m.,
in response to message #15 by Dave Hicks

I remember a BCD version of the good old Turbo Pascal for DOS, back in the mid-'80s.

Ooops, here it is: http://community.borland.com/article/0,1410,20792,00.html

Massimo

                                                
Re: Misunderstood Precision
Message #22 Posted by Paul Brogger on 16 Dec 2002, 12:01 p.m.,
in response to message #13 by Dave Hicks

. . . which brings to mind another example of my own naiveté: After using a slide rule for a while, I bought my HP-21. Shortly thereafter, I packed it up & shipped it to Corvallis, complaining about a "four parts per billion error in the SIN function" . . .

I got it back shortly, with a note explaining the algorithm involved and claiming that "the unit operates according to specifications."

. . . and I wonder what other simplistic notions are being formed by generations fed a diet of digital displays, video game virtual worlds, and half-hour sitcom issue-bites with their cute, tidy dénouements . . .

                                                      
Re: Misunderstood Precision
Message #23 Posted by Dave Hicks on 17 Dec 2002, 1:23 a.m.,
in response to message #22 by Paul Brogger

I remember when the "Pentium Bug" was discovered - on the HP 48GX. It was a similar level of error.

                                                            
Pentium Bug jokes!
Message #24 Posted by Karl Schneider on 18 Dec 2002, 1:12 a.m.,
in response to message #23 by Dave Hicks

Wasn't the Pentium bug discovered around 1994? A Yahoo search revealed a site with compilations of the resulting jokes. Remarkably clever and witty!

Pentium Bug Jokes

                                                                  
Here's another article I've always liked
Message #25 Posted by Dave Hicks on 18 Dec 2002, 1:51 a.m.,
in response to message #24 by Karl Schneider

Not about the flaw but...

John Dvorak PC Computing Magazine March/April Issue

                                                            
Re: Computer Humor Hall of Fame
Message #26 Posted by Paul Brogger on 18 Dec 2002, 10:03 a.m.,
in response to message #23 by Dave Hicks

If I may nominate an article, one of my all-time favorites is this.

                                                                  
Re: Computer Humor Hall of Fame
Message #27 Posted by db(martinez,california) on 19 Dec 2002, 12:47 a.m.,
in response to message #26 by Paul Brogger

paul; thanks. i needed that.

                                                
Re: The chip makers "corrected it" years ago
Message #28 Posted by Fred Lusk (CA) on 19 Dec 2002, 2:12 a.m.,
in response to message #13 by Dave Hicks

Dave…

Thanks for your reply. The TurboPower looks interesting, but I'm not sure I actually have a use for it.

I like your suggestion about taking this up with IEEE…but how, exactly, does one do this?

Regarding your 1/3 example, I don't see a need to type in all those trailing zeros. From an arithmetic (rather than scientific/engineering) standpoint, 1 and 1.000000… are identical, so the trailing zeros add nothing (pun intended). On the other hand, it would be nice to be able to specify the desired precision by adding the appropriate number of decimal places.

Fred

                                                      
Re: Why the extra zeroes . . .
Message #29 Posted by Paul Brogger on 19 Dec 2002, 11:55 a.m.,
in response to message #28 by Fred Lusk (CA)

If I may chime in, I think Dave was suggesting that, with an "intelligent" compiler, adding the zeroes would force greater precision in the intermediate calculation, and in the internal representation of the result.

Some systems decide, for example, to use a floating-point division algorithm rather than a (quicker) integer divide, if a "." is coded after a literal dividend or divisor. For example, "1./3" (or "1./X") might force a floating-point algorithm and result, whereas something like "1/3" might be compiled as an integer divide.
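
Python makes the same distinction, just spelled differently (a sketch; the Fortran-style literals behave as described above):

print(1 // 3)  # 0 -- integer divide, the "1/3" case
print(1 / 3)   # 0.3333333333333333 -- floating-point divide, the "1./3" case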

So, in such a context, the decimal point (and, perhaps, the extra zeroes) DOES add "something" -- increased precision. (That was a nice pun, though!)

                                                            
I thought this was what Fred was proposing
Message #30 Posted by Dave Hicks on 19 Dec 2002, 1:06 p.m.,
in response to message #29 by Paul Brogger

ie:

"Even with floating point math, it should be very easy to count the number of decimal places going in so the result can be reported properly coming out. "

Now if "1 and 1.000000… are identical" then I don't see what is being proposed, unless 1.00000/3.00000 should still be zero? Would a user have to "add something" like 1.00001/3 in order to get an answer closer to .33...?

I'm probably misunderstanding the idea.

                                                                  
Re: I thought this was what Fred was proposing
Message #31 Posted by Michael F. Coyle on 19 Dec 2002, 11:28 p.m.,
in response to message #30 by Dave Hicks

No, you wouldn't do 1.000001/3; the idea (if *I'm* understanding it correctly) is that:

x = 1.00/3.00 would give 0.33
y = 1.0000/3.0000 would give 0.3333
etc.

Note that 0.33 and 0.3333 probably don't have exact binary representations, so I'm not sure how well this would work in practice. Also note that the test "if x=y" would fail, even though both variables "ought to be one-third".

And what result should we give for 1.00000000/3.0 ?

(I had written a diatribe about how PL/I tries to do it all with respect to arithmetic declarations and, IMHO, sometimes causes more problems than it solves. But I've mercifully omitted it here.)

                                                                        
Re: I thought this was what Fred was proposing
Message #32 Posted by Dave Shaffer on 20 Dec 2002, 10:57 a.m.,
in response to message #31 by Michael F. Coyle

re "And what result should we give for 1.00000000/3.0 ?"

That one's easy - if you follow the rules for significant figures: it's 0.33 (2 significant figures). I berate my physics students on this subject all semester long - some of them get it, some of them don't.
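
The rule is mechanical enough to code. A Python sketch (an illustration only, using string literals so trailing zeroes survive):

def sig_figs(literal):
    # Count significant figures in a literal like "3.0" or "1.00000000".
    digits = literal.replace("-", "").replace(".", "").lstrip("0")
    return len(digits)

def sig_div(a, b):
    # Round the quotient to the less precise operand's significant figures.
    figs = min(sig_figs(a), sig_figs(b))
    return float(f"{float(a) / float(b):.{figs}g}")

print(sig_div("1.00000000", "3.0"))  # 0.33
print(sig_div("1.0000", "3.0000"))   # 0.33333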

As to the difference between expressions like 1.00/3.00 and 1.00000/3.00000 in an IF test - that's why numerical IF tests should be tests within some tolerance rather than "IS X = Y?" (Didn't my comment like that occur w-a-a-a-a-y back in this thread!!??!!)

                                                                              
Re: Precision in "IF" tests
Message #33 Posted by Paul Brogger on 20 Dec 2002, 2:04 p.m.,
in response to message #32 by Dave Shaffer

Right. I suppose one could implement this "precision-aware compare" as a subroutine or macro that first renders the two numerical arguments to the specified precision, and then compares those temporary results . . .
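
Such a compare is only a few lines; a Python sketch:

def close_enough(x, y, places):
    # Render both arguments to the stated precision, then compare.
    return round(x, places) == round(y, places)

print(close_enough(1.00 / 3.00, 1.0000 / 3.0000, 2))  # True
print(close_enough(100 - 99.99 - 0.01, 0.0, 10))      # True

(The ABS(X-Y) < TOLERANCE form mentioned earlier in the thread is the more robust variant, since round() itself can land on either side of a halfway case.)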

And, while we're at it, shouldn't a string compare take into account the languages of the values compared? (E.g., "IF 'rojo'(spn) = 'red'(eng) THEN . . ." 8-)

                                                                              
Significant Figures
Message #34 Posted by Michael F. Coyle on 20 Dec 2002, 2:32 p.m.,
in response to message #32 by Dave Shaffer

Mea culpa. You're right of course about 1.00000000/3.0 = 0.33.

                              
Re: HP-35 Documentation
Message #35 Posted by Jim Cathey on 19 Dec 2002, 8:43 p.m.,
in response to message #10 by Fred Lusk

The problem is that computers generally work in binary floating point, not decimal fixed point. They do this for speed: decimal math is slower, and the hardware optimizations have all been done on binary math. The inaccuracies you complain of generally arise in the decimal->binary->decimal conversion process for non-integers, combined with the nature of floating-point math. It is criminal that spreadsheets don't offer a BCD math provision, but there it is. It's not the computer's fault; it is the programmer's.

While I was long aware of this, a number of years back I worked at a company that supplied computers to banks. Our prior product line was Z-80 assembler based and used a BCD library to do all its financial math. No problem. The new line used C on a 68K, and since C had floating point, they attempted to use it for all math. What a disaster! I gave a number of lessons to some programmers at that time. My analogy was that using floating-point math for financial calculations was like finding out how much money was in your wallet using calipers. Floating point (like analog measurement techniques) has an inherent plus/minus characteristic that's pretty unwelcome in financial calculations. I think the final solution for the innards of that software was to use floating-point math on pennies or mils (as integers), with a couple of BCD routines to handle the troublesome stuff. This was a compromise that kept most of the speed of FP without the mistakes.
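
The whole-number trick works because integers are exact whether they live in ints or in doubles (up to the format's limit); only fractions misbehave. A Python sketch of money as an integer count of cents (an illustration of the idea, not that bank's code; non-negative amounts only):

def cents(amount):
    # "100.00" -> 10000: dollars and cents as one integer count of pennies.
    dollars, _, frac = amount.partition(".")
    return int(dollars) * 100 + int((frac + "00")[:2])

balance = cents("100.00") - cents("99.99") - cents("0.01")
print(balance)  # 0 -- exactly zero, no 5e-15 residue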

BTW, these problems were detected during internal development, and weren't ever shipped to customers.

