How do computers work?

A STAFF REPORT FROM THE STRAIGHT DOPE SCIENCE ADVISORY BOARD

Dear Straight Dope: OK, after semi-exhaustive searching around your site, I have to ask this nagging question. How the heck do computers work? I know, this sounds like a simple question. But I find it hard to fathom millions upon millions of 1’s and 0’s floating around my processor every second that it’s turned on. How can a bunch of electrical currents run a highly complex game such as Quake2? I can’t comprehend it. Please, make me understand! Josh Eikenberry

Karl replies:

It’s like this. Nobody knows. No one body, that is. Except maybe Cecil. It’s all a matter of what we in the biz like to call “layers of abstraction.”

Here’s how it works. Somebody builds a machine. A machine that can be programmed–that is, that can perform a series of simple steps, remember the results, make yes/no decisions based on some external stimulus, perform or not perform steps based on the results of the yes/no decisions, and skip either forward or backwards in the sequence of steps and keep going from that new spot. Once somebody built a machine that could do all that, somebody else came along and programmed it to behave like a similar, but more complex machine. The process repeats until what you see on your desk is a machine that’s good for nothing but displaying the guts of the bad guys spattered all over a virtual wall.

Still don’t get it? I don’t blame you. What you really need to know is how the bottom layer, the hardware, works. You can just imagine the rest–tireless programmers slaving in windowless rooms, producing programming languages that other programmers will use to write better programming languages until finally the ultimate programming tool is produced: Microsoft Excel. (Which, of course, is then used to write Quake2.)

At the bottom are the electronics. Basically transistors. Lots of ’em. A transistor is a little doodad that has three wires coming out of it. The collector, the emitter, and the base. Apply the appropriate voltage to the base, and current will flow from the collector to the emitter. Withdraw the voltage, and the flow stops. So a transistor can be thought of as a little switch.
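If it helps to see this somewhere other than silicon, here’s a toy model of the transistor-as-switch idea in Python. (Strictly my illustration; a real transistor is an analog device, but the switch behavior is the part that matters here.)

    def transistor(base, collector=1):
        """Toy model of a transistor used as a switch: voltage on
        the base (1) lets current flow from collector to emitter;
        no voltage on the base (0) and the flow stops."""
        if base == 1:
            return collector  # switch closed: current flows through
        return 0              # switch open: no current

    print(transistor(base=1))  # 1 -- current flows
    print(transistor(base=0))  # 0 -- flow stops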

Next, you connect transistors together to make what are called logic gates. These gates have a power wire, which we can ignore; one or more input wires that carry signals; and an output wire that sends out a result. The signals that come in and go out of the gates are either “yes,” there is a voltage on the wire, or “no,” there is no voltage. These voltages or lack thereof are the ones and zeros floating around in there. Each wire carries 1 bit, either a 1 or a 0.

Let’s construct a simple logic gate from a transistor–a NOT gate, which has one input and one output. A NOT gate is sometimes called an inverter, because when a 1 comes in, a 0 goes out, and when a 0 goes in, a 1 comes out. This is the operation logicians call “not,” hence the name. To make a NOT gate, take a single transistor. Connect the base to the “in” wire, the collector to the “out” wire, and the emitter to ground. (Ground is a wire with zero voltage that sucks up all the electrons you can put into it, and so stays at zero volts.) Here it is; it’s hard to make sense of without a drawing:

       Power
         |
         |
         |-------o out
         |
     Collector
         |
in o----Base
         |
      Emitter
         |
         |
         |
       Ground

Figure 1: Constructing a NOT gate from a transistor

Now, watch what happens when we apply a voltage to “in.” (The whole thing burns up because we’ve short-circuited power directly to ground without any resistors in between. But we won’t talk about that.) The transistor is turned on, and all the power passes through the transistor to ground. The “out” is left at zero volts. So, when a 1 comes in, a 0 goes out. Contrariwise, with no voltage on the “in,” the transistor is turned off, and there is power on the “out.” When a 0 comes in, a 1 goes out.
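In the same toy-Python spirit, the whole NOT gate boils down to this (a sketch of the behavior, not of the electronics):

    def NOT(a):
        """NOT gate: a 1 on the input shorts the output to ground
        (0); a 0 on the input leaves the output at power (1)."""
        return 0 if a == 1 else 1

    print(NOT(1))  # 0
    print(NOT(0))  # 1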

Still with me? Fine, next we’ll find out how to use transistors to compare the signals on two wires.

With 4 transistors you can make a NAND logic gate that has 2 inputs and 1 output. That’s a gate that outputs a 1, unless both inputs are 1, in which case it outputs a 0. (NAND stands for Not AND. It’s a logical AND–only output a 1 when both the first and the second input have a 1–followed by a logical NOT–always output the opposite of what’s put in.)
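Behavior-wise (same disclaimer: this is a sketch of what the gate does, not of the circuit):

    def NAND(a, b):
        """NAND gate: output 0 only when both inputs are 1."""
        return 0 if (a == 1 and b == 1) else 1

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", NAND(a, b))
    # prints: 0 0 -> 1,  0 1 -> 1,  1 0 -> 1,  1 1 -> 0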

With a NAND gate we’re getting somewhere. We’ve got a gate that can compare two signals and tell us about their relationship. In fact, with just a NAND gate and a NOT gate, you can construct all the other logical comparisons you want: AND gates, OR gates (a 1 on either the first or the second input produces a 1 on the output), NOR gates (an OR followed by a NOT), and XOR gates (eXclusive OR–a 1 on either the first or the second input, but not both, produces a 1 on the output wire). NAND gates are just so exciting! Just wait till you see what we’ll do with them later.
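Don’t take my word for the “construct all the others” claim, either. Here’s a sketch of the standard constructions, built on the NAND function from the sketch above. (One wrinkle the prose glosses over: tie a NAND’s two inputs together and it acts as a NOT, so NAND alone really is enough.)

    def NOT_(a):
        return NAND(a, a)        # inputs tied together: NAND acts as NOT

    def AND(a, b):
        return NOT_(NAND(a, b))  # AND is just NAND followed by NOT

    def OR(a, b):
        return NAND(NOT_(a), NOT_(b))  # 1 if either input is 1

    def NOR(a, b):
        return NOT_(OR(a, b))    # an OR followed by a NOT

    def XOR(a, b):
        n = NAND(a, b)           # the classic four-NAND exclusive OR
        return NAND(NAND(a, n), NAND(b, n))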

Now that you can make comparisons between signals, you can actually do useful things. Suppose you want to add two binary numbers. (OK, suppose I want to make you add two binary numbers.) Let’s keep it simple and have each input number be only 1 bit. Here are the possibilities:

 first number    0    1    0    1
second number  + 0  + 0  + 1  + 1
               ---  ---  ---  ---
        total    0    1    1   10

Figure 2: Binary addition of two 1-bit numbers

That last one might be tricky. “10” doesn’t mean “the number between 9 and 11”–decimal 10, in other words. It means binary 10, the equivalent of decimal 2. I’m going to assume you understand what binary numbers are. Don’t disillusion me.
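If you don’t want to take my word on that last column, Python happens to speak binary:

    print(0b1 + 0b1)       # 2 in decimal...
    print(bin(0b1 + 0b1))  # ...which prints as '0b10' in binary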

Now check out this next diagram. It’s a description of the XOR operation.

 first input wire   0   1   0   1
second input wire   0   0   1   1
                  --- --- --- ---
      output wire   0   1   1   0

Figure 3: Operation of an XOR logic gate

See? A 1 on either input, but not both, produces a 1 on the output. Does this diagram look familiar? Yup. It’s just like the addition diagram, except that we only get the rightmost column of the total on the output wire. After all, we only have one output wire. Ta-da! We have added two binary numbers! We didn’t do a “carry” into the next column, but we could add more logic gates, and another output wire, to do it. (Don’t know what I mean by “carry”? Remember grade school again, where you learned to “carry” your addition into the 10’s column? That’s the same “carry.”)
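Here’s that adder sketched in Python, using the XOR and AND functions we built from NANDs above. One thing the figures don’t spell out, so take it as my addition: the carry bit is simply the AND of the two inputs.

    def half_adder(a, b):
        """Add two 1-bit numbers: the sum bit is the XOR of the
        inputs; the carry into the next column is their AND."""
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, carry = half_adder(a, b)
            print(f"{a} + {b} = {carry}{s} (binary)")
    # the last line prints: 1 + 1 = 10 (binary), just like Figure 2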

Still not impressed? Take my word for it. Being able to compare signals is what makes simple little steps like addition possible. And once you can add, you can add 1 to the number called the “program counter.” The program counter is what keeps track of which program instruction the CPU should do. Add one to the program counter, and you do the next instruction in the program’s sequence of instructions.
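Here’s a cartoon of that loop in Python, with instruction names I made up for the occasion (a real CPU sees numeric bit patterns, not friendly strings):

    program = ["load 5", "add 3", "store result", "halt"]

    pc = 0                                   # the program counter
    while program[pc] != "halt":
        print("step", pc, ":", program[pc])  # "execute" the instruction
        pc = pc + 1                          # add 1: on to the next one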

There are other basic things a program needs to do. By subtracting one number from another and then comparing the result to 0, you can tell if the two numbers are the same. This comparison trick works for more than numbers. After all, letters are just patterns of bits in the computer. A particular pattern means “A,” another pattern means “a,” etc. Those letter patterns are series of bits no different from numbers, and so can be subtracted from each other to compare letters. And don’t forget, you can build more complex operations out of simple steps. For instance, repeatedly add to perform multiplication (grade school again: 3 x 2 = 2 + 2 + 2).
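Both tricks, sketched in Python (ord() just hands back the number, that is, the bit pattern, that stands for a character):

    def equal(x, y):
        """Tell if two bit patterns match by subtracting and
        comparing the result to 0."""
        return (x - y) == 0

    print(equal(ord("A"), ord("A")))  # True
    print(equal(ord("A"), ord("a")))  # False: different bit patterns

    def multiply(x, times):
        """Multiplication as repeated addition: 3 x 2 = 2 + 2 + 2."""
        total = 0
        for _ in range(times):
            total = total + x
        return total

    print(multiply(2, 3))  # 6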

Now, it’s possible to get clever and make a multiplication operator straight out of logic gates. But that’s beside the point. You can always take a more complex operation and use clever combinations of much lower level pieces instead. The point is that all computer programs are built out of small pieces, like multiplication, which, when you examine them more closely, turn out to be made of still smaller pieces, like addition. Nobody could construct a program like Quake2 by hand out of logic gates; it’s the hiding of the details of each step, the layers of abstraction, that makes it possible to write the programs we use. Keep picking program operations apart and eventually you get down to logic gates. (No no no. Not Bill Gates, logic gates.)

So, the CPU is made up of all these interconnected logic gates, plus the program counter. The gates can do comparisons to recognize particular patterns of bits as particular instructions. They can do comparisons of data and other operations. The program counter can be incremented to go to the next instruction so that instructions can be executed in sequence. And the program counter can be changed, even in response to data comparisons, so we can skip over or redo a sequence of instructions. What’s left? Memory and clocking.

Your computer has a “clock speed,” 300 megahertz or whatever. Why?

Because it takes time for the electrons to run through these logic gates. Suppose you have two signals, and, because you are executing a particular instruction, the first signal runs through lots of logic gates before it’s done being processed but the second signal runs through hardly any. The second signal is going to be ready before the first signal. You want to wait for the first signal to finish before you feed both the first and the second signal into the next logic gate. That’s what the clock is for. At particular points in the CPU the signals are held until the clock ticks again. The gates then have time to finish and all come up with an answer before the processing continues. The faster the clock, the faster the information moves from one step to the next, through, into, and out of the CPU. Depending on the design of the CPU, and the particular instruction being executed, it can take several ticks of the clock for the CPU to execute one instruction.
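Here’s a toy model of that hold-until-the-tick behavior (my illustration, with made-up names; real designs build clocked latches out of gates):

    class ClockedRegister:
        """Signals can arrive at the input whenever they like, but
        the output only changes when the clock ticks, so fast and
        slow signals move through the CPU in lock-step."""
        def __init__(self):
            self.pending = 0   # the signal waiting at the input
            self.output = 0    # what the next gates actually see

        def present(self, value):
            self.pending = value        # a signal arrives, maybe early

        def tick(self):
            self.output = self.pending  # latched only on the tick
            return self.output

    reg = ClockedRegister()
    reg.present(1)       # the fast signal got here first
    print(reg.output)    # 0 -- nothing moves before the tick
    print(reg.tick())    # 1 -- tick! everything advances together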

Now for memory. We’re not talking disk or tape memory here; that’s way too slow for the CPU (and anyway works more or less like your boring old audio tape). We’re talking real solid-state RAM! We’re talking chips and SIMMs and such-like! (Stop now and whack yourself on the chest in true manly style.) For that matter, we’re talking about the CPU’s program counter too. The program counter is just another bit of memory with special importance: it remembers what step the program is on.

Check this out: you can make a bit (literally one bit–har!) of memory out of two NAND gates. You take the output of each NAND gate and use it as one of the inputs to the other NAND gate, like so:

Input side                     Output side

              -------
 Set o-------|      |
             | NAND |----+----o Output
         +---|      |    |
         |    -------    |
         |               |
         |     +---------+
         |     |
         +-----|---------+    <-- wires cross, but don't touch
               |         |
         +-----+         |
         |               |
         |    -------    |
         +---|      |    |
             | NAND |----+----o (NOT of Output)
Reset o------|      |
              -------

Figure 4: Constructing a flip-flop from 2 NAND gates

This device is called a flip-flop (or bistable multivibrator). I’ll let you warp your mind thinking about why it works; here’s how to use it: Put a 0 on the Set input, and the Output becomes 1. Even after the Set input goes back to 1, the Output will remain 1. Put a 0 on the Reset input and the Output becomes 0. Even after Reset is returned to 1, Output will remain 0. What’s going on? The Output is remembering what it was told to be–and it stays as it was told. Ta-da! This circuit has memory! (By the way, you don’t make both Set and Reset 0 at the same time.) Make a bunch of them on one chip and you’ve got a bunch of bits of memory. You can use the Set and Reset input lines to tell the memory what to remember. (There are other ways to construct memory than to use transistors, but none are so much fun.)
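And if the diagram still warps your mind, here’s the flip-flop as a Python sketch, feedback loop and all. (The NAND function is repeated from the earlier sketch so this stands alone; the settling loop is my crude stand-in for the signals racing around the feedback wires.)

    def NAND(a, b):
        return 0 if (a == 1 and b == 1) else 1

    def latch(set_, reset, q):
        """One settling pass of Figure 4's cross-coupled NANDs:
        each gate's output feeds the other gate's input, so the
        value of q circulates and is remembered."""
        for _ in range(4):            # let the feedback loop settle
            not_q = NAND(reset, q)
            q = NAND(set_, not_q)
        return q

    q = 0
    q = latch(0, 1, q); print(q)  # Set goes to 0: Output becomes 1
    q = latch(1, 1, q); print(q)  # Set back to 1: Output stays 1
    q = latch(1, 0, q); print(q)  # Reset goes to 0: Output becomes 0
    q = latch(1, 1, q); print(q)  # still 0: the circuit remembers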

There ya go. You can now build your own CPU and RAM with nothing but some transistors, bits of wire, and one of those clock thingies, and from there it’s but a short step to Quake2. Aren’t you happy you asked?

Karl

Send questions to Cecil via cecil@straightdope.com.

STAFF REPORTS ARE WRITTEN BY THE STRAIGHT DOPE SCIENCE ADVISORY BOARD, CECIL’S ONLINE AUXILIARY. THOUGH THE SDSAB DOES ITS BEST, THESE COLUMNS ARE EDITED BY ED ZOTTI, NOT CECIL, SO ACCURACYWISE YOU’D BETTER KEEP YOUR FINGERS CROSSED.