Re(long): exponents in 14-nybble BCD floating-point representations
Message #3 Posted by Vieira, L. C. (Brazil) on 12 Apr 2006, 1:12 p.m.,
in response to message #1 by Cameron Paine
Hi, Cameron; good to read your posts again (BTW, is it just my impression, or have you been 'out' for a while?)
Thank you for considering my name as a possible 'consultant', and likewise for deciding to share the subject d8^D. The more people reasoning about it, the better.
BCD coding is something I take much care with when explaining to my students. I do not go deep into every possibility, because it is a representation 'close to humans': each decimal digit occupies its own four-bit nybble, so a number reads the way we write it. Ideally, the system itself provides support to handle the BCD representation and its related operations efficiently; otherwise, all the BCD handling routines must be written in software so that they generate the expected results.
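To make that concrete, here is a minimal sketch of packed BCD in Python, two digits per byte; the function names are mine, purely for illustration:

    def to_bcd(n):
        # Pack a non-negative integer into packed-BCD bytes, one decimal digit per nybble.
        digits = [int(d) for d in str(n)]
        if len(digits) % 2:
            digits.insert(0, 0)               # pad to an even number of digits
        return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

    def from_bcd(b):
        # Unpack packed-BCD bytes back into an integer.
        value = 0
        for byte in b:
            value = value * 100 + (byte >> 4) * 10 + (byte & 0x0F)
        return value

    assert to_bcd(1234).hex() == "1234"       # the hex dump reads like the decimal number
    assert from_bcd(bytes.fromhex("0042")) == 42

Notice how the hex dump of the BCD bytes reads exactly like the decimal number itself; that is the 'close to humans' part. A processor without BCD support in hardware pays for this convenience with software routines like these.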
The following considerations are based on my own observations; they may be neither accurate nor correct if the detailed internals are not the way I envision them.
I remember finding many references to BCD coding as something mostly aimed at small, portable systems, but I am not sure the same applies to current devices. Based on what I know, when a portable system handles floating-point operations extensively and BCD handling is built into it, the final code is smaller and more efficient. I think this has to do with Thomas Okken's considerations.
The HP41's Nut processor is 56-bit based, and since the HP41 builds on its predecessors while the Voyagers in turn build on it, taking the HP41 as the reference for this subject should give us common ground. Some particularities of the HP41 internals came to my attention after reading material about MCODE. The way nybbles are interpreted and handled may also reflect how the special system registers (A, B, C, D, etc.) are organized. The 'three-nybble' based engine is particularly effective when we keep in mind that the Nut processor handles 10-bit instruction codes.
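For reference, here is the 14-nybble (56-bit) register layout as I understand it, with a small Python decoder as my own sketch of that understanding; it is not lifted from any ROM listing, so take it with a grain of salt:

    def decode_nut_register(nybbles):
        # Decode a 14-nybble Nut floating-point register, most significant nybble first:
        #   nybble 13    : mantissa sign (0 = +, 9 = -)
        #   nybbles 12-3 : ten BCD mantissa digits, implied point after the first
        #   nybbles 2-0  : exponent digits; negative exponents in 10's complement
        #                  (the HP41 uses -99..+99, so stored values are 000-099 or 901-999)
        assert len(nybbles) == 14 and all(0 <= d <= 9 for d in nybbles)
        sign = -1 if nybbles[0] == 9 else 1
        mantissa = 0
        for d in nybbles[1:11]:
            mantissa = mantissa * 10 + d
        exponent = nybbles[11] * 100 + nybbles[12] * 10 + nybbles[13]
        if exponent >= 500:                   # 10's complement: 996 means -4
            exponent -= 1000
        return sign * mantissa * 10.0 ** (exponent - 9)

    # -1.25e-4 would be held as sign 9, mantissa 1250000000, exponent 996:
    print(decode_nut_register([9, 1, 2, 5, 0, 0, 0, 0, 0, 0, 0, 9, 9, 6]))   # -0.000125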
If we consider technological advancements, RISC-based processors are faster even though they demand bigger source code (more instructions) than CISC-based processors. If we consider that the HP42S emulates all basic HP41 functionality yet does not use the same memory structure, then we must conclude that all of its system code is new and that it handles numbers in a different way. In fact, it is a lot easier today to enhance functionality by keeping (or even reducing) the hardware 'size' while increasing the clock frequency and the number of lines in the main code.
SO… (it is about to conclude, just a bit more) I for one consider that the floating-point organization chosen by the designers of the 56-bit processors was something like the best 'cost × efficiency' balance. If the system already offers some BCD functionality, let's take advantage of it. At that time, memory chips were not as inexpensive as they are today, and the time spent writing code also had to account for the time programmers spent learning what the processor they were writing for was capable of doing. Today, using a -500 to 500 power-of-ten exponent is a matter of software design, not necessarily a restriction of the processor design. But we can go back there and point out that the HP71 already used this exponent range…
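As a quick worked example of why three BCD exponent digits suggest exactly that kind of range, here is the same 10's-complement rule from the sketch above, isolated (again, my own illustration):

    def exp3_decode(e):
        # Interpret a three-digit BCD exponent field (000-999) as 10's complement.
        return e - 1000 if e >= 500 else e

    assert exp3_decode(499) == 499            # largest positive exponent
    assert exp3_decode(999) == -1
    assert exp3_decode(500) == -500           # most negative exponent

Three digits give 1000 codes; splitting them down the middle yields +0..+499 and -500..-1, the neighborhood of the range mentioned above. Nothing about the register width forces a narrower range; the firmware decides how many of those codes to honor.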
As you wrote:
Quote:
I have two quite different implementations that I'm playing with: a brute force one and another that is so "clever" that debugging it is becoming a pain. I'm leaning towards the brute force one. It's much more fun to watch while stepping through it with the debugger.
I'd guess that the one you call 'brute force' has the bigger code and does not care too much about particular resources, meaning you mostly wrote everything you needed. The so-called 'clever' one makes me think you decided to 'shrink' the code and rely on more internal resources, right? As a result, you consider the second one a complete pain to debug. Well, at least you have the option of going ahead with the larger memory footprint.
I'd seriously consider that many of the decisions to use this or that approach to BCD handling used to be local, though. Laporte's analyses, amongst other good ones, are very good references to consider in these cases. As Jacques Laporte mentioned in his post about the HP35:
Quote:
Reading your words "remarkable mind job", I think of the man who debugged this code, under maximum pressure, in 1972.
Think that the code is crammed into 768 words: no room left in these 3 ROMs. Only one "no operation" at fixed address 00045 in ROM 0 (there is no key code "45"); you can't move it, you can't use it.
It was a kind of constant-sum game. For 2 instructions added somewhere (and that was the case with the exp(ln(2.02)) problem), 2 other instructions had to be removed, and in the same ROM!
You wrote that you do not own an HP 35; in fact the algorithms evolved, of course (Classic, Woodstock, Spice…), mainly on the precision issue. But the approach in the transcendental functions remained the same. Here, the name of Dave Cochran must be cited. He is the man who implemented CORDIC in the 9100 and 35 calculators, based on J. E. Meggitt's paper, and made it possible.
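For anyone curious about what Meggitt-style CORDIC looks like, here is a minimal binary floating-point sketch in Python. The real HP MCODE works in BCD with pseudo-multiplication and differs considerably in its details, so treat this only as the gist of the shift-and-add idea:

    import math

    N = 32                                       # number of micro-rotations
    ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
    K = 1.0
    for i in range(N):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))    # pre-correct the cumulative CORDIC gain

    def cordic_sin_cos(theta):
        # Rotation mode: start at (K, 0) and rotate by +/- atan(2^-i) until the
        # residual angle z reaches zero; (x, y) converge to (cos, sin).
        # Valid for |theta| <= sum(ANGLES), about 1.74 rad; reduce larger arguments first.
        x, y, z = K, 0.0, theta
        for i in range(N):
            d = 1.0 if z >= 0.0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * ANGLES[i]
        return y, x

    s, c = cordic_sin_cos(0.5)
    print(abs(s - math.sin(0.5)), abs(c - math.cos(0.5)))   # both on the order of 1e-10

Only shifts (the 2^-i scalings), adds, and a small table of angles are needed, which is exactly why the technique suited ROM-starved machines like the 9100 and the 35.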
To close this post, forgive me for not answering your questions… Instead, I added near-philosophy and a bit of history.
Edited: 12 Apr 2006, 1:23 p.m.