Re: CPU's and Precision Message #7 Posted by Bill Wiese on 24 Jan 2003, 3:49 p.m., in response to message #6 by Joerg Woerner
Joerg's right... HP's orig calc architecture took shape at a time when memory space and transistor count were at a premium. Thus we had a serial architecture, with instructions optimized for BCD floating-point arithmetic. Keyboard/display scanning was hardware-assisted so as not to burn up ROM code. IIRC, the whole HP35 took 1Kx10bit words of code, which is pretty phenomenal, and the main CPU took about 3000 transistors.
Nowadays, for low-cost products, mask ROM memory is cheap. (RAM does take a bit more area.) So any old 8-bit CPU will work, and even the overhead of using compiled code is tolerable. Speed isn't an issue either for a hand calculator, so selection of which CPU to use is driven largely by price and I/O/peripheral capabilities, as well as available tools (compiler, debugger, hardware/software emulators).
On orig HP architectures, the hardware design, memory layout, registers, etc. were oriented around 56-bit BCD floating-point values. It would be hard (or would take quite a bit of extra coding) to do, say, 14-digit-mantissa BCD math on a Coconut-architecture CPU. Saturns were widened to handle more precision & dynamic range.
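To make the register orientation concrete, here's a rough sketch (my own illustration, not HP microcode) of packing a number into a 56-bit register as 14 BCD nibbles: mantissa sign, ten mantissa digits, exponent sign, two exponent digits. The function name and the use of 9 as the negative-sign nibble are my assumptions for the example.

```python
def pack_bcd56(mant_digits, mant_neg=False, exp=0, exp_neg=False):
    """Pack a number into a 14-nibble (56-bit) BCD register.

    Layout (MSD first): sign nibble, 10 mantissa digits,
    exponent sign nibble, 2 exponent digits.
    """
    assert len(mant_digits) == 10 and 0 <= exp <= 99
    nibbles = ([9 if mant_neg else 0] + list(mant_digits)
               + [9 if exp_neg else 0] + [exp // 10, exp % 10])
    reg = 0
    for n in nibbles:            # shift each BCD digit into the word
        reg = (reg << 4) | n
    return reg

# pi to 10 digits, exponent 0: each decimal digit shows up as a hex nibble
print(f"{pack_bcd56([3, 1, 4, 1, 5, 9, 2, 6, 5, 3]):014X}")
```

The nice property of this layout is that printing the register in hex reads back the decimal digits directly, which is why BCD serial hardware could shift one digit per clock through a 1-digit ALU.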
On a byte-oriented processor, this isn't an issue, which is why you see some HP calcs w/24-digit precision.
(I really wonder about internal algorithms, accuracy, etc., though, on these cheap calcs. Just because X*Y is good to 24 digits doesn't mean the transcendentals are - and just because exp(ln(x)) = x doesn't mean that exp(x) and ln(x) are each that accurate: they could actually have symmetric defects that cancel.)
I'll have to run the 30S against Maple results.
Bill Wiese
San Jose, CA