Re: Whither an explanation of 'microcode'?
Message #12 Posted by Bill Wiese on 27 Jan 2003, 6:09 p.m.,
in response to message #11 by Paul Brogger
OK, here, I'll try again.
In general, microprocessors, when viewed by the world outside of the chip, execute 'instructions'. Each instruction does only a fairly simple task: add two or three numbers together, test one number against another,
get a value from memory, store a value to memory, change program flow via a jump to a new address, call a subroutine, etc. Each instruction consists at minimum of an 'opcode' and perhaps one or more 'operands' - values that give the opcode more info: a target address, a register or registers to work on, etc. Sometimes a CPU instruction - while conceptually quite simple - may have a degree of complexity, with variations like indexed+offset addressing for arrays, etc.
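To make the opcode/operand split concrete, here's a toy sketch in Python of decoding one instruction word. The 16-bit layout and field widths are invented for illustration - real ISAs differ widely:

```python
# Toy example (not a real ISA): a 16-bit instruction word holding a
# 4-bit opcode, a 4-bit register field, and an 8-bit operand/offset.
def decode(word):
    opcode  = (word >> 12) & 0xF   # what to do (ADD, LOAD, JMP, ...)
    reg     = (word >> 8)  & 0xF   # which register to work on
    operand = word & 0xFF          # immediate value, address offset, etc.
    return opcode, reg, operand

# Hex 0x1234 splits into opcode 1, register 2, operand 0x34 (52 decimal):
print(decode(0x1234))  # -> (1, 2, 52)
```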
But something must 'tell' the CPU 'how' to do things. While it is true an adder can add, and that a memory fetch means just putting an address on the bus and setting some control lines, there are lotsa little steps involved in doing each instruction.
The CPU has lots of units that need to be enabled or disabled for given operations, for example. These need to happen in parallel just for sanity's sake - and for speed.
But a complex instruction might need to perform a couple of additions to form proper memory addresses for array indexing before an actual add or fetch or write is completed. These are serial dependencies.
So we have a collection of 'states' that are required for the CPU to do its thing. Some of these can happen all at once (parallel) and some must happen in sequence. These can be stored as bits in a wide microword or microinstruction in the CPU's microcode. Bits within this microinstruction enable/disable sections of the CPU, select the arithmetic operation to be performed, etc. If several steps that are dependent upon each other need to be performed in sequence, then several microinstructions need to be performed.
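The parallel-vs-sequential distinction can be sketched with a toy microword layout in Python. All the bit names here are invented; a real microword may be dozens of bits wide, one per control line:

```python
# Toy microword layout (invented for illustration): each bit in a wide
# microinstruction directly enables a piece of the CPU, all in parallel.
ALU_ENABLE   = 1 << 0   # let the ALU drive its output
MEM_READ     = 1 << 1   # assert the memory-read control line
MEM_WRITE    = 1 << 2   # assert the memory-write control line
REG_WRITE    = 1 << 3   # latch the ALU/memory result into a register
PC_INCREMENT = 1 << 4   # bump the program counter

# One microinstruction: several things happen at once (parallel)...
uinsn_fetch = MEM_READ | PC_INCREMENT

# ...while dependent steps need a *sequence* of microinstructions,
# e.g. an indexed load: compute the address first, then read memory.
indexed_load = [
    ALU_ENABLE,               # step 1: base + index -> address register
    MEM_READ | REG_WRITE,     # step 2: fetch from that address, latch it
]
```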
Microcode is kinda the 'DNA' of a processor. A sufficiently universal hunk of hardware could readily be a 486 or a 68000 or a PowerPC thru different microprograms.
(This is not cost-effective now since it uses extra silicon. But a few yrs ago, Edge Computing had a chip that could run 386 or 68K code and wanted to run Mac & PC software on the same machine with just one CPU. Cool, but the market wasn't ready, etc.)
Microcode memory is sometimes known from the old mainframe days as 'control store': the storage of info that makes the processor behave the way it does in relation to the outside world (memory, peripherals, software, etc.)
A microcoded CPU takes an instruction and uses that to figure out where in the microcode it's supposed to execute. The microcode runs thru a few steps, the CPU does its thing, and then the microcode is ready to fetch the next instruction. The microcode/microprogram is in many ways really a smaller (well, tiny) program inside the CPU, whose job is to give the CPU its outward personality and define exactly how its instructions - the ones visible to the outside world, thru application & system programmers - operate.
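That fetch-then-dispatch loop can be roughed out in a few lines of Python. Everything here is invented for illustration - the opcodes, the step names standing in for wide microwords, and the dictionary playing the role of control store:

```python
# Very rough sketch of a microcoded CPU's main loop: the opcode indexes
# into control store to find the microroutine for that instruction.
control_store = {
    # opcode -> sequence of micro-steps (labels standing in for microwords)
    0x1: ["read_operands", "alu_add", "writeback"],       # ADD
    0x2: ["compute_address", "mem_read", "writeback"],    # LOAD
}

def run(program):
    trace = []
    for opcode in program:                        # fetch next instruction
        for micro_step in control_store[opcode]:  # walk its microroutine
            trace.append(micro_step)              # "CPU does its thing"
        # ...then the microcode loops back to fetch the next instruction
    return trace

print(run([0x1, 0x2]))
```

Note that the outside world only ever sees "ADD" and "LOAD"; the micro-steps are invisible, which is exactly the point about the microprogram defining the CPU's outward personality.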
Very few programmers ever touch microcode. It's burned into the CPU in its own ROM and never changes (unless the CPU company releases a new processor version with an update or bugfix.) [Since this ROM has some uniqueness to its patterns of 0s and 1s, it can sometimes be 'compressed' using logic minimization techniques into a PLA (programmable logic array). This was done for speedup in earlier minicomputer days.]
Even though what microcode does is fundamentally simple, writing it can be a brain-frying programming episode. There is much to manage inside a complex CPU, and complex side-effects must be avoided (say, internal bus contentions). Tracy Kidder's book about the Data General Eclipse MV/8000, "The Soul of a New Machine", did a pretty good nontechie job of covering some issues about microcode writing.
Lately, CPUs have become complex enough that bugs creep in. Intel Pentium IIIs and above, IIRC, have some RAM area inside for a microcode update to fix any bugs in any of the CPU's instructions. This would've been expensive in earlier days of mass-market microprocessors.
Mainframes and minicomputers often had similar 'writeable control store' in addition (or instead of) ROM-based microcode. Rather than for bugfixes, this allowed really skilled programmers to create one or more special-purpose instructions to optimize an application program for speed (or even perhaps memory use). I know several later midrange & up VAXes had this feature.
Many processors now do NOT use microcode - in particular, RISC processors like Sun SPARC, ARM, MIPS, etc. RISC CPUs (RISC = reduced instruction set computer) in general have simpler instructions than their 'CISC' brethren (CISC = complex instruction set computer), but often take more of these instructions to do the same job as one CISC instruction. Since each instruction is simple, the opcode and operands flow straight to the CPU hardware, and results straight out. Reduced complexity in theory offers simplicity of CPU design and layout, and in theory higher speed. However, RISC and CISC CPU designs are merging closer to each other: late Pentiums, for example, have an internal, simple, blazingly fast RISC microengine for many operations. Some later SPARCs, PowerPCs, etc. have some complex hardware-assisted instructions (though they are not microcoded).
Microcode was originated 40-50 yrs ago by Maurice Wilkes so there was a standardized way of describing (and then programming) a CPU's internal operation and its instruction set - rather than pulling one's hair out with random logic designs. It also - because it was small and right at the core of the CPU - was made from expensive fast memory, which was OK because not much of it was needed (esp. in relation to system memory sizes.) Using complex instructions allowed fewer instructions to do more, esp. important when instructions were fetched from slow main memory: microcode memory might've been 10X faster than main memory. (Differences are much smaller now.)
Several semiconductor companies used to sell 'bit slice' components with which one could make a custom microcoded board-level CPU. This was quite common for military and similar applications involving DSP (digital signal processing), special high-speed controllers for disk drives, etc. As regular CPU speeds soared in the '80s and DSP chips came about, the need for this was vastly reduced - although people ARE still building their own custom CPUs on FPGA (field programmable gate array) chips for special applications.
San Jose, CA