Re: RPN, transcendentals, reverse-engineering, etc
Message #4 Posted by Bill Wiese on 26 Dec 2002, 2:59 p.m.,
in response to message #3 by Glen Kilpatrick
>[Glen Kilpatrick wrote:]
>It's not conceivable that contemporary engineering
>would be needing manuals in order to reverse-engineer
>RPN. There are still too many "examples" already extant
>(although I've always wondered if HP used Chebyshev
>polynomials to calculate transcendental functions, some
>references to internal specs, *not* in manuals, but HP
>probably has the originals, might clarify exactly the
Yes, whether a calc is RPN or algebraic is independent of the algorithms used for log/trig functions, and any mildly competent programmer can create an RPN-like shell in C. The original challenge in building the original HP (or even TI) calcs was to fit a whole calculator - including log/trig functions - into about 1-2 Kwords of ROM. (As I recall, TI calcs of roughly equivalent functionality took more ROM space than similar HPs because TI used a kludgy 4-bit microcontroller, while HP's instruction set was designed from the outset to save code space in calculator applications.)
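As a toy illustration of how little an RPN-like shell demands (sketched here in Python rather than C; the function and names are invented for this example, not taken from any calculator firmware):

```python
# Minimal sketch of an RPN evaluator. A real HP-style calc uses a fixed
# 4-level stack (X, Y, Z, T) with lift/drop rules; a plain list stands in here.

OPS = {
    "+": lambda y, x: y + x,
    "-": lambda y, x: y - x,
    "*": lambda y, x: y * x,
    "/": lambda y, x: y / x,
}

def rpn_eval(tokens):
    """Evaluate a sequence of tokens (numbers and operators) in RPN order."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            x = stack.pop()   # X: most recently entered value
            y = stack.pop()   # Y: value beneath it
            stack.append(OPS[tok](y, x))
        else:
            stack.append(float(tok))
    return stack[-1]

# (2 + 3) * 4 entered RPN-style: 2 ENTER 3 + 4 *
print(rpn_eval(["2", "3", "+", "4", "*"]))  # → 20.0
```

The point of the sketch is only that the user-interface logic is trivial; the hard part, as noted above, was squeezing the numerics into a Kword or two of ROM.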
Even today, I'd say that a lot of Asian calc mfgrs don't necessarily know a whole lot about their algorithms.
HP, at least through the 41 series, seems to have used CORDIC shift-and-add algorithms for higher-order functions (not square root - that is done with an algorithm that looks roughly like long division). Between some pre- and postprocessing steps, the CORDIC iteration delivers one digit of the result (within the internal processing domain) per iteration. There may be range-reduction/scaling steps beforehand ("unwinding the circle" for trig functions, for example), and some postprocessing afterward (say, deriving one trig function from another, or changing the base of a log) before the *actual* result is returned.

When later calcs (42, 32, 48, etc.) came out supporting a wider dynamic range (+/-9.9E+/-499 instead of +/-9.9E+/-99), there could have been some algorithmic changes. But I don't think so - the fundamental algorithms have had enough time spent on them (modelling, testing, etc.) that a company like HP would be loath to change them. CORDIC was well suited to log/trig functions on older/slow calcs because it wasn't much more complex than division.
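To illustrate the "looks roughly like division" point about square root: the classic digit-by-digit method is a shift-and-subtract loop in the same family as long division. This is a binary, integer-only sketch; HP's actual decimal routine differed in detail.

```python
import math

def isqrt_bitwise(n):
    """Integer square root by trial subtraction, one result bit per iteration -
    structurally similar to restoring long division."""
    root = 0
    # start with the largest power of 4 that is <= n
    bit = 1 << ((n.bit_length() - 1) // 2 * 2) if n else 0
    while bit:
        trial = root + bit
        if n >= trial:
            n -= trial                 # "subtract" step, as in division
            root = (root >> 1) + bit   # accept this result bit
        else:
            root >>= 1                 # reject it
        bit >>= 2
    return root

print(isqrt_bitwise(144))  # → 12
```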
Other methods do exist: rational-function approximation and Chebyshev (and other) polynomials. Some of these require division. And contrary to what you learned in calculus class, Taylor/Maclaurin series aren't that useful for accurate and efficient function approximation. These methods all involve approximating the function in question over a VERY LIMITED RANGE, with pre- and postprocessing of arguments and results.
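A small sketch of the polynomial approach (illustrative only - the coefficients and range here are invented for the example, not anyone's production math library): fit sin(x) on the narrow range [0, pi/4] with a low-degree Chebyshev interpolant; a wider argument would first be range-reduced with trig identities, as described above.

```python
import math

def cheb_coeffs(f, a, b, n):
    """Chebyshev coefficients c[0..n-1] of f on [a, b], sampled at Chebyshev nodes."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fvals = [f(0.5 * (b - a) * t + 0.5 * (b + a)) for t in nodes]
    return [2.0 / n * sum(fvals[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
            for j in range(n)]

def cheb_eval(c, a, b, x):
    """Evaluate the Chebyshev series via the Clenshaw recurrence (no division)."""
    t = (2.0 * x - a - b) / (b - a)
    d = dd = 0.0
    for cj in reversed(c[1:]):
        d, dd = 2.0 * t * d - dd + cj, d
    return t * d - dd + 0.5 * c[0]

c = cheb_coeffs(math.sin, 0.0, math.pi / 4, 8)   # degree-7 fit
xs = [i / 100.0 * math.pi / 4 for i in range(101)]
err = max(abs(cheb_eval(c, 0.0, math.pi / 4, x) - math.sin(x)) for x in xs)
print(err)  # max error over the range; well under 1e-8 on this narrow interval
```

Note how fast the error falls once the range is narrow - that's exactly why range reduction plus a short polynomial beats a Taylor series expanded about a single point.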
X86 and other CPUs with floating-point engines have changed design over time. The 80x87 NPU chip, and its on-board derivatives up through the 486, used CORDIC methods for log/trig functions: they were controllable algorithms with predictable error behavior, and they didn't take much microcode space on those NPUs.
However, CORDIC is essentially serial in nature: you deliver one bit, hex digit, or BCD digit per iteration. (IIRC, if you try to speed things up by taking bigger "chunks" - say, 4 digits at a time - there is no real gain, because if you "bite off too much" on one iteration you may often "have to spit some back". And the internal tables of shifted input values that drive CORDIC would have to grow larger too.)
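The shift-and-add iteration described above can be sketched in a few lines. This is a binary, floating-point illustration of CORDIC in rotation mode - HP's calculators used a decimal (BCD) variant, and real fixed-point hardware would replace the multiplications by 2^-i with plain shifts.

```python
import math

N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]   # precomputed arctan table
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))           # CORDIC gain, ~0.6073

def cordic_sin_cos(theta):
    """Return (sin, cos) of theta (radians, |theta| <= ~1.74);
    each iteration resolves roughly one more bit of the angle."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        if z >= 0.0:   # rotate toward the target angle, shift-add only
            x, y, z = x - y * 2.0 ** -i, y + x * 2.0 ** -i, z - ANGLES[i]
        else:
            x, y, z = x + y * 2.0 ** -i, y - x * 2.0 ** -i, z + ANGLES[i]
    return y, x

s, c = cordic_sin_cos(0.5)
print(s, c)  # close to math.sin(0.5), math.cos(0.5)
```

The serial nature is visible in the loop: each pass contributes one table entry's worth of angle, which is why N iterations buy roughly N bits (or, in a decimal machine, one BCD digit per scaled pass).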
When fast hardware multiply and divide are available, non-CORDIC methods may be superior for speed (their error behavior, though, needs real modelling and analysis), and some processing steps of these non-CORDIC methods can be parallelized with additional hardware if desired. The real art of these designs is to take a system with an N-bit register set and deliver N bits of result - instead of, say, N-2 good bits with the last 2 bits being "slop". AMD, Intel, and Cyrix all have differing methods of doing their transcendentals; there are some papers out there about how Cyrix did theirs, as I recall.
San Jose, CA