Tock Tick Time

History of the Microprocessor

Since childhood, I have been interested in electronics.  I had an uncle who owned a television repair business.  Through his and my father’s encouragement, I dabbled in electronics through vacuum tube projects, with little success but with a lot of excitement and consequential learning.  Just the mention of vacuum tubes should date me for the reader: I am now in my 80s.
As I moved out of my teens, I went on to college, eventually earning a master’s degree in Computer Science during those early days of room-sized computers whose power could easily fit into today’s musical birthday cards.  This was the 1960s, when transistors, which had overtaken vacuum tubes a decade or so earlier as the basic element of electronics, had given way to integrated circuits: the packing of many transistors and associated components into a single chip that represented a function rather than a single element.  These functions could be analog, such as voltage or current control, or digital, where the function represented a logical decision, such as comparing the voltages present on multiple inputs and outputting a single “yes/no” voltage based on the comparison.  These digital integrated circuits were the basis of the first “less than room-sized” computers, like the IBM 1620 that I programmed at Purdue University in 1966, which was “only” the size of a large office desk.
In 1971 the world changed, but relatively few noticed.  A company called Intel introduced the 4004, the first microprocessor, essentially the next step in chip complexity: first the transistor, then the digital integrated circuit, and now the microprocessor.  The microprocessor had built into it the capability of merging multiple “yes/no” circuits into functions such as addition, subtraction, and comparison, or decision-making, at a much higher level.  A person could now direct (“program”) the microprocessor to respond based upon inputs instead of having to wire a group of integrated chips together.  Now the “wiring” was done by feeding the chip instructions.  No more soldering.
A person (a “programmer”) could instruct the chip to output a “1” if A and B were both true, and a “0” if either were false.  We need to keep in mind that programmers had been instructing computers this way since the early 1950s; what was different with the microprocessor was that this programming could be done at your kitchen table, with the computer on a small printed circuit board sitting in front of you, powered by a 9-volt battery!
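That “1 if A and B are both true” decision is a logical AND.  As an illustration only, here is how the same decision looks written out in a modern language (Python); this is a sketch of the logic, not the actual instructions a programmer would have fed to a chip like the 4004:

```python
# A logical AND: output 1 only when both inputs are 1, otherwise 0.
# This mirrors the "yes/no" decision described above, written in
# Python purely for illustration.

def and_gate(a: int, b: int) -> int:
    """Return 1 if both inputs are 1, otherwise 0."""
    return 1 if (a == 1 and b == 1) else 0

# Trying all four input combinations shows the full behavior:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b))
```

Only the combination where both A and B are 1 produces a 1; every other combination produces a 0.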