The Museum of HP Calculators

HP Forum Archive 10


clarification of 'microcode' term - not really applicable to HP calcs
Message #1 Posted by Bill Wiese on 24 Jan 2003, 2:15 p.m.

{set FLAME_MODE=On;} :)

I get a bit aggrieved when I hear the term 'microcode' being used pretty much synonymously with 'machine code', 'firmware', etc. This term is abused not just here in HP calc-land but in service contracts (e.g., 'microcode update' meaning firmware update) and in lots of descriptions of ROM-based firmware.

Microcode is the result of "microprogramming" work and is really the content of a CPU's "control store" memory - a "writeable control store" (WCS) when it can be reloaded - which can be either a small ROM or a PLA (a logic array with a collapsed number of states, kinda a minimized ROM that takes fewer gates/less area).

Microcode talks directly to the inner hardware of a CPU and is used to define "macroinstructions" - the normal machine code/assembly instructions that system & application firmware programmers use. Many bits of a microinstruction are just used to switch various pieces of CPU hardware in or out, select registers, etc. Microcode is often 'wide' to achieve decent parallelism - an 8-bit processor could have 30- to 80-bit-wide microcode. It is usually not that deep, because each instruction (one would hope!) only takes one or a few microcycles. The original VAX instruction set, for example, took 16K words of 42-bit-wide microinstruction memory to implement its 400+ instructions. Some VAXen also had a RAM area to allow customers to create user-written instructions in microcode.
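To make the 'wide microword' idea concrete, here's a sketch in C of what one might look like. The field names and widths are invented purely for illustration (and bit-field packing is compiler-dependent anyway) - no real machine's microword is being reproduced:

    #include <stdint.h>

    /* A hypothetical 48-bit microinstruction. Each field directly drives
       some piece of internal CPU hardware during one microcycle. */
    typedef struct {
        uint64_t alu_op      : 4;   /* which ALU operation to perform       */
        uint64_t a_bus_src   : 4;   /* which register drives the A bus      */
        uint64_t b_bus_src   : 4;   /* which register drives the B bus      */
        uint64_t dest_reg    : 4;   /* which register latches the result    */
        uint64_t mem_read    : 1;   /* assert the memory read strobe        */
        uint64_t mem_write   : 1;   /* assert the memory write strobe       */
        uint64_t latch_flags : 1;   /* update the condition-code flags      */
        uint64_t cond_sel    : 3;   /* condition to test for micro-branches */
        uint64_t next_uaddr  : 12;  /* address of the next microinstruction */
        uint64_t             : 14;  /* unused padding up to 48 bits         */
    } microword_t;

Note how most of the bits exist only to open or close some datapath for one microcycle - that's where the parallelism comes from.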

It's true that some very, very simple control apps could be written entirely in microcode (a fast state-machine controller, etc.), but in general microcode manipulates a CPU's inner hardware to implement the instruction-set opcodes visible to the outside world.

Microcoding's original concept was to allow hardware changes at the lowest level while presenting a common 'user interface' at a higher level. Back in mainframe days, the actual CPU internals could differ substantially across a product line, yet application programs could remain unchanged (same instruction set).

Also, in the days when RAM was slow, CISCy architectures really made sense: a slow instruction fetch from RAM, core, etc. would translate into a series of very fast microinstructions executed out of a very fast but limited-depth control store. The slower the main memory, the deeper the microcode per instruction can be. As memory speeds up, the microcode needs to get more done in parallel, so a redesign of the inner CPU architecture might start using wider microinstructions, with more work getting done each microcycle.

Another way of thinking about it: the CISC instructions represented, in a way, 'compressed data' that was decompressed within the CPU into a sequence of microinstructions. The microcode, in effect, can also be thought of as an internal L1 cache!

Many processors are NOT microcoded - for example, RISC or RISC-like CPUs. The instruction word has some very simple logic decoding that selects read/write, type of ALU op, etc. and the opcodes+operands essentially directly drive the CPU hardware.

Our HP calcs' instructions (in the Coconut CPU, say) are 10 bits wide (fetched serially). These instructions are at the lowest level of hardware control and are not 'interpreted' by a microengine. Thus the content of HP ROMs - machine code - is NOT microcode.
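To show the contrast in C (with a made-up 10-bit encoding, NOT the real Nut/Coconut instruction set): in a directly decoded machine, the fetched word's own bit fields are, more or less, the control signals - no micro-ROM interprets them.

    #include <stdint.h>

    /* Direct decoding, RISC-style. The encoding below is invented purely
       for illustration; the real Coconut/Nut opcode map is different. */
    extern void drive_alu(unsigned op, unsigned operand_sel);
    extern void drive_register_file(unsigned reg_sel);

    void execute(uint16_t insn)                 /* one 10-bit instruction word */
    {
        unsigned opclass = (insn >> 8) & 0x3;   /* broad instruction class     */
        unsigned op      = (insn >> 4) & 0xF;   /* e.g. which ALU function     */
        unsigned operand =  insn       & 0xF;   /* e.g. register/field select  */

        if (opclass == 2)                       /* arbitrary class assignment  */
            drive_alu(op, operand);             /* these are control lines,    */
        else
            drive_register_file(operand);       /* not an interpreted program  */
    }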

Really the only way you could regard HP instruction ROM content as microcode is if you regard regular HP RPN programming as an assembler instruction set, but that's a little too high-level for my mind: I think of HP RPN programming as a higher-level language like C or BASIC that's interpreted by a program written in assembler.

Bill Wiese San Jose, CA

      
Re: clarification of 'microcode' term - not really applicable to HP calcs
Message #2 Posted by Andrés C. Rodríguez (Argentina) on 24 Jan 2003, 5:05 p.m.,
in response to message #1 by Bill Wiese

It is good to state this issue clearly, as you did. I share your view 100%.

      
Re: clarification of 'microcode' term - not really applicable to HP calcs
Message #3 Posted by John K. (US) on 24 Jan 2003, 10:07 p.m.,
in response to message #1 by Bill Wiese

Well said! And, for the most part, I agree with you. But...

> Really the only way you could regard HP instruction ROM content as microcode is if you regard regular HP RPN programming as an assembler instruction set, but that's a little too high-level for my mind: I think of HP RPN programming as a higher-level language like C or BASIC that's interpreted by a program written in assembler.

Wow. I don't think I've ever seen anyone lump C and BASIC into the same category before... :^)

At the risk of sounding pedantic (well, more so than usual, anyway), the programming "language" used in the 41-series is strikingly similar to assembly. It consists of, essentially, human-readable mnemonics which are converted directly to opcodes and data fields for storage and execution, and are "disassembled" back into mnemonics when "program" mode is entered. (Indeed, synthetic programming techniques rely on this behavior to create "synthetic" opcodes.) I'm not aware of any "high-level" language that translates to machine code as cleanly. Even a relatively low-level 3GL like C contains an awful lot of syntactic sugar compared to assembly.

In fact, the entire experience of programming a Coconut system is a lot like working in a low-level system debugger (on-the-fly dis/assembly, step execution, etc.), but with a very small code window around the PC. Heck, if one is really daring, it is even possible to perform "stupid register tricks" with the PC itself... ;^)

            
Re: clarification of 'microcode' term - not really applicable to HP calcs [HP41 lang HLL? assy? ]
Message #4 Posted by Bill Wiese on 26 Jan 2003, 6:14 a.m.,
in response to message #3 by John K. (US)

Perhaps we're pedantic, but this does merit discussion... I'll intersperse my comments with yours.

*********************************************************
Wow. I don't think I've ever seen anyone lump C and BASIC into the same category before... :^)
..........................................................
They're both 3GL procedural languages. I love, live, eat and breathe C, and this was not meant to disparage it. BASIC tries to protect the programmer a bit more from himself (range checking, etc.), and BASIC brings some library functionality into the language (PRINT, INPUT, etc.), while C as a language can be separated from its library (printf(), fgets() are not truly part of C). But many more-than-minimal BASICs offer pointer access & dereferencing, varying loop types besides just FOR/NEXT, etc. Plus the concepts of allocation, as well as string handling, do differ, but this is more due to heritage: C was always compiled, while many BASICs were only interpreted.

*********************************************************
the programming "language" used in the 41-series is strikingly similar to assembly. It consists of, essentially, human-readable mnemonics which are converted directly to opcodes and data fields for storage and execution, and are "disassembled" back into mnemonics when "program" mode is entered.
..........................................................
This is not really different behavior from that of a BASIC interpreter. Type in a program line in BASIC, and it will be 'tokenized' before storage: the keywords/commands are converted to shorter symbols (bytes) and stored. These are NOT opcodes, which represent actual machine instructions. When the program (or a range of program lines) is LISTed, these tokens are 'unwound' back into their readable textual equivalents. To actually execute a statement, dozens or hundreds of machine opcodes+operands are processed, some of them being overhead for fetching and managing the tokenized symbols.

**********************************************************
I'm not aware of any "high-level" language that translates to machine code as cleanly. Even a relatively low-level 3GL like C contains an awful lot of syntactic sugar compared to assembly.
..........................................................
Each statement in HP41C language does less, since there are no complex expressions or syntax - the most complex HP statement is something like, say, STO IND 03. Each HP statement still does quite a bit; firmware manages the operations on an abstract data type, the BCD floating point value. [True, the CPU is favorably disposed to this format and can operate on subfields of these values or combinations thereof.]
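For the curious, here is roughly what that abstract data type looks like. The nibble layout below follows the way the Nut's 56-bit registers are usually described; take the decoder as a sketch, not firmware-accurate code:

    #include <stdio.h>

    /* One register: 56 bits = 14 BCD nibbles, commonly described as
         nibble 13     : mantissa sign (0 = +, 9 = -)
         nibbles 12..3 : ten mantissa digits (MSD in nibble 12)
         nibble 2      : exponent sign (0 = +, 9 = -)
         nibbles 1..0  : two exponent digits (tens-complement if negative) */
    typedef struct { unsigned char nib[14]; } nut_reg;   /* nib[i] = nibble i */

    /* Decode such a register into a double, just to show what the firmware
       routines are managing on every STO, RCL, +, LOG, ... */
    static double nut_to_double(const nut_reg *r)
    {
        double m = 0.0;
        for (int i = 12; i >= 3; i--)              /* gather mantissa digits   */
            m = m * 10.0 + r->nib[i];
        m /= 1e9;                                  /* mantissa is d.ddddddddd  */
        int e = r->nib[1] * 10 + r->nib[0];
        if (r->nib[2] == 9) e -= 100;              /* negative exponent        */
        if (r->nib[13] == 9) m = -m;               /* negative mantissa        */
        double p = 1.0;                            /* crude 10^|e|             */
        for (int k = 0; k < (e < 0 ? -e : e); k++) p *= 10.0;
        return e < 0 ? m / p : m * p;
    }

    int main(void)
    {
        /* 1.234: mantissa digits 1,2,3,4,0,..., exponent 0, both signs + */
        nut_reg r = { { 0,0,0, 0,0,0,0,0,0, 4,3,2,1, 0 } };
        printf("%g\n", nut_to_double(&r));         /* prints 1.234             */
        return 0;
    }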

C was originally designed to replace assembly - it was supposed to be a 'faster assembly' (to write) and offer better performance than other supposedly higher-level languages of the time (FORTRAN and Algol, mostly). Many assemblers now - notably Microsoft's later MASM versions - include very useful pseudo-ops and 'structured programming' constructs like loop instructions, etc.

**********************************************************
In fact, the entire experience of programming a Coconut system is a lot like working in a low-level system debugger (on-the-fly dis/assembly, step execution, etc.),
........................................................
While the analogy is broadly true, most modern HLLs include debuggers in their IDEs that allow statements to be stepped thru one line at a time, with 'watch' windows on desired variables, watching registers or memory areas as the program executes, etc.

********************************************************

I'm just really saying that HP41 user language is separated enough from the machine that it's a unique HLL. User is protected from errors (range checks so no NONEXISTENT errors, operations on values out of range such as log of a negative number, etc.) thru quite a bit of firmware. Many cycles are burned just in the fetching & management of 41C program statements, and the user is isolated from actual machine addresses. The fact that [PACK] is required or things need to be cleaned up after [GTO][.][.] indicates that the user is not really in a system space.

. . . . . . . . . Bill Wiese San Jose, CA

                  
Re: clarification of 'microcode' term - not really applicable to HP calcs [HP41 lang HLL? assy? ]
Message #5 Posted by John K. (US) on 26 Jan 2003, 8:34 a.m.,
in response to message #4 by Bill Wiese

>> Wow. I don't think I've ever seen anyone lump C and BASIC into the same category before... :^)

> They're both 3GL procedural languages.

It was a joke; note the smiley...

> This is not really different behavior from that of a BASIC interpreter. Type in a program line in BASIC, and it will be 'tokenized' before storage[.]

Well, sort of. HP 41 programs are stored in memory as binary instructions. To use your example, STO IND 03 is stored as two bytes: 0x9183. STO 03 would be 0x33 (it's a short-form store), but 0x9103 would be functionally identical. If these are "tokens," then they're doing an awfully good impersonation of opcodes. And, from a programming environment POV, it's pretty much a wash, since it is possible to construct these codes directly (more or less) by side-stepping the instruction-to-code conversion.
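A few lines of C make the point visible. This handles only the STO forms quoted above (the full HP-41 byte table is far larger), so treat it as an illustration rather than a disassembler:

    #include <stdio.h>

    /* Decode just the STO forms mentioned above:
         0x30..0x3F : short-form STO 00..15 (one byte)
         0x91 nn    : two-byte STO; bit 7 of the postfix byte means IND,
                      the low bits select the register.                  */
    static void decode(const unsigned char *p, int len)
    {
        for (int i = 0; i < len; ) {
            unsigned char b = p[i];
            if ((b & 0xF0) == 0x30) {                      /* short-form STO */
                printf("%02X      STO %02d\n", b, b & 0x0F);
                i += 1;
            } else if (b == 0x91 && i + 1 < len) {         /* long-form STO  */
                unsigned char post = p[i + 1];
                printf("%02X %02X   STO %s%02d\n", b, post,
                       (post & 0x80) ? "IND " : "", post & 0x7F);
                i += 2;
            } else {
                printf("%02X      ?\n", b);                /* not handled    */
                i += 1;
            }
        }
    }

    int main(void)
    {
        unsigned char prog[] = { 0x33, 0x91, 0x83, 0x91, 0x03 };
        decode(prog, (int)sizeof prog);
        /* prints:  33      STO 03
                    91 83   STO IND 03
                    91 03   STO 03     */
        return 0;
    }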

> Each HP statement still does quite a bit; firmware manages the operations on an abstract data type, the BCD floating point value.

Yes, but then so do all processors that I can think of off the top of my head. It's all quite similar to CISC microprocessors, and the DEC (PDP and early VAX architectures), DG Eagle, and HP 1000 and 3000 CPUs, in a simplistic (and safer -- see below) sort of way. The only real difference (since most modern machines can handle floating-point math, if not BCD) is that the 41 determines data type by looking at the first nybble of the data, rather than relying on the instruction. Of course, such things are generally handled in microcode on the mini-computers I mentioned... ;^)

> most modern HLLs include debuggers in their IDEs that allow statements to be stepped thru one line at a time, with 'watch' windows on desired variables, watching registers or memory areas as the program executes, etc.

And a grand thing it is, too! :^) But none of the ones I'm aware of will allow you to fiddle with the instructions of an executing program like my old 6800 and 68K debuggers would. It's probably a good thing that they don't, but it was kind of fun...

> The fact that [PACK] is required or things need to be cleaned up after [GTO][.][.] indicates that the user is not really in a system space.

Of course, in a system with variable-length instructions like the two I mentioned above, when an eight-byte instruction is replaced with, say, a four-byte instruction in core, it was common practice to add one or more NOPs after the new instruction to keep the offsets the same and prevent the program from getting lost in the weeds. That's pretty much how the 41 does things, though it can also insert space into the code in register-sized chunks -- something which would have been really cool in a debugger...

> I'm just really saying that HP41 user language is separated enough from the machine that it's a unique HLL. User is protected from errors (range checks so no NONEXISTENT errors, operations on values out of range such as log of a negative number, etc.) thru quite a bit of firmware.

I'm not sure that I agree that things like range checking and data validation are indicators of an HLL -- after all, C doesn't really do any of that. It will gleefully allow you to blow right past the end of an array. Indeed, when you ask it to create a pointer, it helpfully zeros it for you, and if you insist on scribbling to that address, C won't bat an eye, though your system will generally throw a few choice words in your direction afterwards. Assuming it has memory protection, of course.

Still, I see your point. But if someone were to write an assembler that was incapable of translating jump or branch instructions to arbitrary addresses (directly or indirectly), forcing the programmer to use labels, it would be remarkably similar -- if somewhat less than useful. And there are synthetic methods to get around those protections to twiddle bits in almost any of the system registers (including the PC), as anyone who has experienced a MEMORY LOST message can confirm. :^)

                        
Re: clarification of 'microcode' term - not really applicable to HP calcs [HP41 lang HLL? assy? ]
Message #6 Posted by Bill Wiese on 26 Jan 2003, 4:09 p.m.,
in response to message #5 by John K. (US)

More interspersed comments...

**********************************************************
Well, sort of. HP 41 programs are stored in memory as binary instructions. To use your example, STO IND 03 is stored as two bytes: 0x9183. STO 03 would be 0x33 (it's a short-form store), but 0x9103 would be functionally identical. If these are "tokens," then they're doing an awfully good impersonation of opcodes.
..........................................................
You can look at the token table of a BASIC interpreter. Typically, bytes with the high bit set (>=$80) indicate BASIC keywords and operators. PRINT A,B could be $84 $41 $2C $42, for example. Z=PEEK(X) could be $5A $FC $9D $40 $58 $41. '=', '+', PRINT, PEEK are all tokens. (So are parentheses, but simpler BASICs leave expressions as-is without rearranging or "RPN"ing them.) Lexicals (variable names, text, etc.) are left alone.
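A toy tokenizer shows the flavor of it. The token values ($84 for PRINT, and so on) are the hypothetical ones from the example above, not any real interpreter's table:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy BASIC tokenizer: keywords become single bytes with the high bit
       set; lexicals (variable names, punctuation) are stored as-is. */
    static const struct { const char *kw; unsigned char tok; } table[] = {
        { "PRINT", 0x84 }, { "PEEK", 0x9D }, { "=", 0xFC },
    };

    static int tokenize(const char *src, unsigned char *out)
    {
        int n = 0;
        while (*src) {
            int matched = 0;
            for (size_t k = 0; k < sizeof table / sizeof table[0]; k++) {
                size_t len = strlen(table[k].kw);
                if (strncmp(src, table[k].kw, len) == 0) {
                    out[n++] = table[k].tok;        /* keyword -> one byte  */
                    src += len;
                    matched = 1;
                    break;
                }
            }
            if (!matched) {
                if (!isspace((unsigned char)*src))
                    out[n++] = (unsigned char)*src; /* lexicals left alone  */
                src++;
            }
        }
        return n;
    }

    int main(void)
    {
        unsigned char line[64];
        int n = tokenize("PRINT A,B", line);
        for (int i = 0; i < n; i++) printf("$%02X ", line[i]);
        printf("\n");        /* prints: $84 $41 $2C $42 - as in the example */
        return 0;
    }

A LIST command just runs the same table in reverse to 'unwind' the bytes back into text.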

********************************************************
from a programming environment POV, it's pretty much a wash since it is possible to construct these codes directly (more or less) by side-stepping the instruction-to-code conversion.
.........................................................
Yes, and I can build/read a tokenized BASIC program 'by hand' in a similar fashion. (Or pick a language - there ARE C interpreters out there.)

*********************************************************
(re: debuggers) And a grand thing it is, too! :^) But none of the ones I'm aware of will allow you to fiddle with the instructions of an executing program like my old 6800 and 68K debuggers would. It's probably a good thing that they don't, but it was kind of fun...
...........................................................
Oh, many embedded-system IDEs can let you tweak the contents of program space while executing, especially if there's a hardware emulator with dual-port RAM involved for real-time use. And many CPU simulators allow you to do this too.

********************************************************
I'm not sure that I agree that things like range checking and data validation are indicators of an HLL -- after all, C doesn't really do any of that.
.........................................................
Um, they are not REQUIREMENTS of HLLs. But they do rule out fundamental low-level use. (Even on CPUs with memory protection, a bad memory access is attempted without range testing; when the MMU barfs, a seg-fault error interrupt occurs.)

******************************************************
And there are synthetic methods to get around those protections to twiddle bits
....................................................
Synthetic programming is an 'accident'. There was NO intention of ever letting users touch this stuff. I don't think HP 41 app modules used synthetics either (correct me if I'm wrong) - when they wanted performance or special features, there was HP assembler available to do things 'the right way'.

Bill Wiese San Jose

      
Re: clarification of 'microcode' term - not really applicable to HP calcs
Message #7 Posted by David Smith on 25 Jan 2003, 12:12 p.m.,
in response to message #1 by Bill Wiese

But then there are some references in HP documentation referring to firmware as microcode...

            
Re: clarification of 'microcode' term - not really applicable to HP calcs
Message #8 Posted by Andrés C. Rodríguez (Argentina) on 25 Jan 2003, 5:16 p.m.,
in response to message #7 by David Smith

I support the "microcode" = "control store" meaning. Firmware is just software recorded in a permanent manner. In my opinion, an HP41 Math Pac is firmware, although usually we keep the firmware term for machine (system) code and not for "user language" code.

If HP says firmware (even system code such as the 10-bit word ROMs) is microcode, well, it is just another mistake from a "usually" (should I say "formerly"?) respected source...

                  
Re: clarification of 'microcode' term - not really applicable to HP calcs
Message #9 Posted by HrastProgrammer on 26 Jan 2003, 2:26 a.m.,
in response to message #8 by Andrés C. Rodríguez (Argentina)

It is so nice to be smarter than HP. If HP says 'microcode' then it is microcode. What is the problem with it?

                        
Re: clarification of 'microcode' term - not really applicable to HP calcs
Message #10 Posted by Bill Wiese on 26 Jan 2003, 6:32 a.m.,
in response to message #9 by HrastProgrammer

*** It is so nice to be smarter than HP. If HP says 'microcode' then it is microcode. What is the problem with it? ***

HP *writers* and manual folks call it microcode. Doesn't make it correct or preferred usage, etc.

Microcode and microprogramming and control-store concepts (both fixed and writeable variants) were envisioned/developed by Maurice Wilkes roughly 50 years ago. Since then, industry tradition has generally carried on the concept that microcode defined what happened inside the CPU hardware at a very, very low level and was used to formulate the operation of the instructions used by system & application programmers.

Since it was usually contained in ROM (in core early on, later in silicon, sometimes as a state-reduced PLA), it got confused, terminology-wise, with ordinary ROM-resident firmware. Pretty soon anything that wasn't loaded off a disk or other media and lived on a chip was called (incorrectly, I insist) microcode.

Another example: your PC's ROM BIOS is NOT microcode in general. It's just code and data that 'does stuff' on your PC. However, on later Pentium PCs there's a small area (about 1K) of data in that ROM that gets downloaded INTO the Pentium's WCS (writeable control store) buffer and contains new microinstructions to fix otherwise buggy regular CPU instructions. [It is apparently possible for *some* buggy Pentium *hardware* instructions, as well as microcoded ones, to be revectored to microcoded replacements resident in CPU-internal RAM, although these replacements may run slower than a correct hardware implementation would.]

Bill Wiese San Jose, CA

                              
Whither an explanation of 'microcode'?
Message #11 Posted by Paul Brogger on 27 Jan 2003, 10:55 a.m.,
in response to message #10 by Bill Wiese

Does anyone know of a site with a simple (perhaps graphic) explanation of the HLL / Assembly / Microcode/ CPU Hardware hierarchy?

I've an acquaintance who occasionally uses computer analogies to help explain behavioral concepts, and he evinced a need for an additional level of "programming" between the real hardware (presumably, what we as individuals actually have to work with) and conscious behavior (i.e., Assembly or HLL programming). I suggested that microcode might serve as a simple analogue for habit or subconscious attitudes -- things that can be changed, but which profoundly affect how our intent is translated into action.

So, I'd like to find a site that explains (to the "layman") microcode in its context. (Though in the absence of an instructional site embellished with graphics, only the clarification of a few acronyms would be required to let Bill Wiese's opening post suffice nicely.)

                                    
Re: Whither an explanation of 'microcode'?
Message #12 Posted by Bill Wiese on 27 Jan 2003, 6:09 p.m.,
in response to message #11 by Paul Brogger

Paul...

OK, here, I'll try again.

In general, microprocessors, when viewed from the world outside the chip, execute 'instructions'. Each instruction only does a fairly simple task: add two or three numbers together, test one number against another, get a value from memory, store a value to memory, change program flow via a jump to a new address, call a subroutine, etc. Each instruction consists of, at minimum, an 'opcode' and perhaps one or more 'operands' - values that give the opcode more information: a target address, a register or registers to work on, etc. Sometimes a CPU instruction - while conceptually quite simple - may have a degree of complexity, with variations like indexed+offset addressing for arrays, etc.

But something must 'tell' the CPU 'how' to do things. While it is true an adder can add, and that a memory fetch means just putting an address on the bus and setting some control lines, there are lotsa little steps involved in doing each instruction.

The CPU has lots of units that need to be enabled or disabled for a given operation, for example. Those enables need to happen in parallel, just for sanity's sake - and for speed. But a complex instruction might need to perform a couple of additions to form proper memory addresses for array indexing before an actual add or fetch or write is completed. These are serial dependencies.

So we have a collection of 'states' that are required for the CPU to do its thing. Some of these can happen all at once (in parallel) and some must happen in sequence. These can be stored as bits in a wide microword, or microinstruction, in the CPU's microcode. Bits within this microinstruction enable/disable sections of the CPU, select the arithmetic operation to be performed, etc. If several steps that depend on each other must happen in sequence, then several microinstructions are executed, one after another.

Microcode is kinda the 'DNA' of a processor. A sufficiently universal hunk of hardware could readily be a 486 or a 68000 or a PowerPC thru different microprograms. (This is not cost-effective now since it uses extra silicon. But a few yrs ago, Edge Computing had a chip that could run 386 or 68K code and wanted to run Mac & PC software on the same machine with just one CPU. Cool, but the market wasn't ready, etc.)

Microcode memory is sometimes known from the old mainframe days as 'control store': the storage of info that makes the processor behave the way it does in relation to the outside world (memory, peripherals, software, etc.)

A microcoded CPU takes an instruction and uses it to figure out where in the microcode it's supposed to execute. The microcode runs thru a few steps, the CPU does its thing, and then the microcode is ready to fetch the next instruction. The microcode/microprogram is in many ways a smaller (well, tiny) program inside the CPU, whose job is to give the CPU its outward personality and to define exactly how the instructions visible to the outside world - to application & system programmers - operate.
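In C, that inner loop might look roughly like this. Everything here (field names, table sizes, the 'last' bit) is hypothetical; it's only meant to show the shape of the fetch/dispatch/micro-step cycle:

    #include <stdint.h>

    /* Hypothetical micro-sequencer: the macroinstruction's opcode selects a
       starting address in the control store, microwords run until one is
       marked "last", then the CPU goes back for the next macroinstruction. */
    typedef struct {
        unsigned alu_op : 4;    /* what the datapath should do this cycle */
        unsigned mem_op : 2;    /* 0 = none, 1 = read, 2 = write          */
        unsigned last   : 1;    /* end of this instruction's microroutine */
        unsigned next   : 12;   /* address of the next microword          */
    } uword;

    extern uword    control_store[4096];   /* the micro-ROM itself           */
    extern uint16_t dispatch[256];         /* opcode -> first micro-address  */

    extern uint8_t fetch_macroinstruction(void);  /* from (slow) main memory */
    extern void    do_microstep(const uword *u);  /* drive the hardware      */

    void cpu_run(void)
    {
        for (;;) {
            uint8_t  opcode = fetch_macroinstruction();
            uint16_t uaddr  = dispatch[opcode];        /* find the routine    */
            for (;;) {                                 /* fast microcycles    */
                const uword *u = &control_store[uaddr];
                do_microstep(u);
                if (u->last) break;                    /* opcode is finished  */
                uaddr = u->next;
            }
        }
    }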

Very few programmers ever touch microcode. It's burned into the CPU in its own ROM and never changes (unless the CPU company releases a new processor version with an update or bugfix). [Since this ROM has some uniqueness to its patterns of 0s and 1s, it can sometimes be 'compressed' using logic-minimization techniques into a PLA (programmable logic array). This was done for speedup in earlier minicomputer days.]

Even though what microcode does is fundamentally simple, writing it can be a brain-frying programming episode. There is much to manage inside a complex CPU, and complex side-effects must be avoided (say, internal bus contentions). Tracy Kidder's book about Data General, "The Soul of a New Machine", did a pretty good nontechie job of covering some issues about microcode writing.

Lately, CPUs have become complex enough that bugs creep in. Intel Pentium IIIs and above, IIRC, have some RAM area inside for a microcode update to fix any bugs in any of the CPU's instructions. This would've been expensive in earlier days of mass-market microprocessors.

Mainframes and minicomputers often had a similar 'writeable control store' in addition to (or instead of) ROM-based microcode. Rather than for bugfixes, this allowed really skilled programmers to create one or more special-purpose instructions to optimize an application program for speed (or perhaps even memory use). I know several later midrange-and-up VAXes had this feature.

Many processors now do NOT use microcode - in particular, RISC processors like Sun Sparc, ARM, MIPS, etc. RISC CPUs in general have simpler instructions than their 'CISC' brethren (RISC = reduced instruction set computer, CISC = complex instruction set computer), but often take more of these instructions to do the same job as one CISC instruction. Since each instruction is simple, the opcode and operands flow straight into the CPU hardware, and results straight out. Reduced complexity in theory offers simplicity of CPU design and layout and, in theory, higher speed. However, RISC and CISC CPU designs are merging closer to each other: late Pentiums, for example, have a simple, blazingly fast internal RISC microengine for many operations. Some later Sparcs, PowerPCs, etc. have some complex hardware-assisted instructions (though they are not microcoded).

Microcode originated 40-50 yrs ago with Maurice Wilkes, so that there would be a standardized way of describing (and then programming) a CPU's internal operation and its instruction set - rather than pulling one's hair out with random logic designs. It also - because it was small and sat right at the core of the CPU - was made from expensive, fast memory, which was OK because not much of it was needed (esp. in relation to system memory sizes). Using complex instructions allowed fewer instructions to do more, especially important when instructions were fetched from slow main memory: microcode memory might've been 10X faster than main memory. (Differences are much smaller now.)

Several semiconductor companies used to sell 'bit slice' components with which one could make a custom microcoded board-level CPU. This was quite common for military and similar applications involving DSP (digital signal processing), special high-speed controllers for disk drives, etc. As regular CPU speeds soared in the '80s and DSP chips came about, the need for this was vastly reduced - although people ARE still building their own custom CPUs on FPGA (field-programmable gate array) chips for special applications.

Bill Wiese San Jose, CA

                                          
Re: Explanation of Microcode
Message #13 Posted by Paul Brogger on 28 Jan 2003, 9:41 a.m.,
in response to message #12 by Bill Wiese

Bill:

Thanks for the extra effort. I thought your opening post would be a good start, but I think you've just written the article I was looking for. You've outdone yourself. I'll pass on a link.

I read Kidder's book 'way back . . . Didn't the guy doing the microcode debugging sort of flip out in the midst of things, and disappear for a while? Yeah, it sounds like it can be rough!

While working for a defense contractor (~1980), I was supporting programmers working in JOVIAL on a computer described as "a ruggedized IBM 360 with indirect addressing added". I'd always assumed that the instruction-set modifications were implemented via microcode changes.

I appreciate your informed posts, and on behalf of at least one curious non-techie acquaintance, I thank you for taking the time to clarify this subject.

-- Paul B.

                                                
Re: Explanation of Microcode
Message #14 Posted by Wayne Brown on 29 Jan 2003, 7:46 a.m.,
in response to message #13 by Paul Brogger

The thing I remembered best about the microcode guy in Kidder's book was his very selective memory. He could keep thousands of lines of microcode in his head, but he had to keep his own phone number on a slip of paper in his desk because he could never remember it.

                                          
Small contribution...
Message #15 Posted by Andrés C. Rodríguez (Argentina) on 28 Jan 2003, 5:37 p.m.,
in response to message #12 by Bill Wiese

Just a small contribution to Bill's excellent posting:

Tracy Kidder's book was about the Data General Eclipse MV 8000 development, poised to compete against the DEC VAX. Its code name was Eagle.

The DG Nova 3 was a state-machine design (a finite automaton); the Nova 4 was a microprogrammed bit-slice design. The DG Eclipse was a microprogrammed CPU in which the user could actually write new instructions by editing the Writable Control Store contents. It had a "microcode step" switch on the front panel (a panel with rocker lever switches and bit lights) to single-step the microcode execution for testing and debugging purposes.

                                                
Re: I want one of those!
Message #16 Posted by Paul Brogger on 29 Jan 2003, 3:42 p.m.,
in response to message #15 by Andrés C. Rodríguez (Argentina)

That Eclipse thingie with the writeable control store and the switches and lights?

Now that sounds like a real machine!

Where do I get one?

(I'd like to write an HP-42s emulator for it . . . 8^)

                                                      
Re: I want one of those!
Message #17 Posted by Andrés C. Rodríguez (Argentina) on 29 Jan 2003, 5:39 p.m.,
in response to message #16 by Paul Brogger

Paul: you will need a 19-inch rack to mount the Eclipse CPU in; then you can add a 10-megabyte hard disk (5 fixed + 5 removable)... The hard disk alone weighed around 150 lb. Back around 1980, my first (part-time) job was with these Eclipse and Nova CPUs here in Buenos Aires...

                                                
Re: Front Panel Switches & Lights
Message #18 Posted by Paul Brogger on 30 Jan 2003, 10:10 a.m.,
in response to message #15 by Andrés C. Rodríguez (Argentina)

I started computing in a college where the students' BASIC system ran on an HP-2000 -- I think it had switches, but it certainly had lights!

The most up-to-date piece of software was always the idle loop -- everyone was always coming up with a new way to make the pattern of lights dance rhythmically while the CPU was otherwise not busy.

My first electronics project was assembling the school's IMSAI 8080 (S-100 bus) computer kit. The front panel had a long row of rocker switches and red LEDs. Until the disk drive was operational, we'd key in a boot program one byte at a time (address and data in binary), and then push a button to set it in motion . . . Any mistake, and we'd have a "hung" CPU, requiring a restart and byte-by-byte debugging.

(And worse than that, it was 10 miles through the snow to get there, uphill both ways, with no shoes and nothing but a cold bagel for lunch. Those young geeks don't appreciate how difficult real programming was in the good ol' days . . . )

