# HP Forums

Full Version: New Approach Could Sink Floating Point Computation
Don't blame me for the title of the post; that's the title of the article! Anyway, given the broad range of mathematical discussions on this forum, I thought this article would be of interest to some of the members. The article discusses Posits, a proposed replacement for the IEEE Std. 754 floating-point format and its operations. Posits are the work of scientist John Gustafson.

https://www.nextplatform.com/2019/07/08/...mputation/

The article itself doesn't go into great detail about Posit implementation, but it contains links to published work that does. The main claim to fame of Posits is that they use fewer bits, perform arithmetic operations faster, and have fewer representational problems and errors than IEEE floats. Naturally, there's plenty of argument over such claims, and the comments posted to the article go into some detail about them, including responses by Gustafson himself. I found the comments section a good read.

A number of organizations, such as Lawrence Livermore National Lab, are investigating Posits. The article lists several US/EU research organizations looking into the number format, some of which may use it in upcoming projects like the Square Kilometer Array.

So, if anyone is looking for a challenge for their homebrew calculator math implementation, here's a chance to do something really different!

Cheers,
~Mark
I used to have a copy of his book, The End of Error. It's a decent read and he is persuasive. There has been some disagreement from inside the numerical analysis community (there is a video of a debate between William Kahan and John Gustafson somewhere).

The premise of the number system is to dynamically vary the length of numbers as required and to represent the gaps between exact values using intervals (hence the book title: round-off isn't possible in this system because every number is either represented exactly or falls within a known interval). It is a neat idea, but it doesn't mean the end of numerical analysis or error.

The variable length is where the claimed benefits come from. So long as the variations can be handled efficiently, fewer bits are required, which saves memory and power. "Faster" might be a misnomer: modern processors implement IEEE floating point in hardware while this system is done in software, so it will be slower and use more power, though it could still require less memory. A properly optimised hardware implementation would make things interesting.

Pauli
THE END of ERROR
Unum Computing
John L. Gustafson

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2015 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

Preface
“The purpose of mathematics is to eliminate thought.” —Philip G. Saffman (1931 – 2008)

Formulas, numbers, and computer code all make a book harder to read. The densest code is relegated to the appendices to relieve this, and in general the reader does not need to read the code sections to understand what follows. This book was written in Mathematica, a good way to make sure the math is sound and the graphs are correct. In one example, a four-bit “unum” value gets the right answer whereas a quadruple precision (128-bit) float fails; the main reason the code is shown is because some skeptics would otherwise not believe such calculations are possible, and would suspect smoke-and-mirrors. Everything needed to define a complete, working, rigorous computing environment is here.
None of the new ideas presented here are patented or pending patents, and that is quite intentional.
The purpose of the unum approach is to eliminate thought.

Acknowledgments

What kind of nut cares this much about computer arithmetic? I have had the honor and pleasure of working with and meeting several members of the equally obsessed.

I save my deepest appreciation for last. In 2012, I visited a legendary Berkeley professor named William Kahan just after discovering the ubox method. We spent six hours talking about everything from the IEEE Standard to n-body dynamics to car repair. The cool thing about the visit is that I got to thank him for introducing me to numerical analysis. You see, Professor Kahan was the man who created the software for the HP calculators that introduced me to the hazards of rounding error. He has therefore been my tutor in numerical analysis, one way or another, for almost forty years. {emphasis mine}

Thank you, Professor Kahan.

BEST!
SlideRule

My apologies to all: the above narrative is an extract from the aforementioned publication. I should have clearly stated this upfront.
.
Hi, mfleming:

(07-15-2019 07:50 PM)mfleming Wrote: [ -> ]Anyway, given the broad range of mathematical discussions on this forum, I thought this article would be of interest to some of the members. [...]

Indeed, thanks for posting (and commenting) the reference.

Quote:Naturally there's plenty of argument over such claims and the posted comments to the article go into some detail about them, including responses by Gustafson himself.

I've read the article and in particular the posted comments, including one by John L. Gustafson, where he says (the highlighting is mine):

John L. Gustafson Wrote:.
Whereas IEEE 754 mandates the setting of flags when there is rounding, underflow, and overflow, four decades have shown that programmers do not use them. Programming languages do not support them [...] And you can’t view the ‘inexact flag’ in any language other than assembler.
.

This is blatantly incorrect; an obvious counterexample is HP-71B BASIC, which fully complies with IEEE 754 and supports all those flags. And of course, you can view the inexact flag as well as all the others directly from BASIC (or the command line, for that matter) using BASIC statements, no need to use "assembler".

I don't understand why Gustafson made such an incorrect statement after having dealt extensively with Prof. Kahan, who obviously knew everything about the HP-71B and its capable IEEE implementation as part of its BASIC language, and surely must have mentioned it to Gustafson at one time or another.

Perhaps John L. Gustafson doesn't consider HP-71B BASIC a programming language? Or maybe Prof. Kahan doesn't!? Or both!?

V.
.
(07-16-2019 12:38 AM)Valentin Albillo Wrote: [ -> ]Perhaps John L. Gustafson doesn't consider HP-71B BASIC a programming language ? Or maybe Prof. Kahan !?. Or both !?

V.
.

Ah, I seriously doubt Gustafson examined every single programming language before making such an all-encompassing assertion. It's easy to exaggerate the case after examining only a subset of languages, especially if you want to drive home a point at the price of a loss of accuracy. I'm sure that annoys a subset of his detractors! Anyway, engineering is the art of calculated tradeoffs, right?

What caught my attention as an EE was the reduced power consumption. The Square Kilometer Array project mentioned in the article is budgeting 10 MW of power for its supercomputer infrastructure. Zounds, how large a city would that power?

Sliderule, sounds like you have had some very fortunate moments in your career...
~Mark
I suspect he was defining programming language support as being native to the language definition.

ISO C99 does support the inexact flag via the fenv.h header and a library function call. I very much doubt he neglected C99 by accident.

As I mentioned, he seems to be persuasive.

Pauli
Reference URLs
• HP Forums: https://www.hpmuseum.org/forum/index.php