In August, IBM announced it had made an important breakthrough in a new field, known as quantum computing. Its engineers had produced in the laboratory a computer comprising five quantum bits, or "qubits," and used the device to solve a basic mathematical problem.
A humble beginning, to be sure — but you're bound to hear more such announcements in the future. Compared to quantum computers, the chips used today in everything from handhelds to mainframes look like something from the 1940s.
Back then, the heavyweight of computing machines — both figuratively and literally — was the ENIAC. Comprising some 18,000 vacuum tubes, ENIAC weighed in at roughly 30 tons of glass and metal, and could fill a three-bedroom apartment. Worse, ENIAC's tubes guzzled power and ran hot. With no cooling mechanism, the machine needed almost constant maintenance.
Computer science was testing the limits of vacuum-tube engineering. To be feasible, a more powerful machine would need to be simplified in design, even as its sophistication increased. Nonetheless, the editors of Popular Mechanics were optimistic. "Computers in the future may have only 1,000 vacuum tubes," they opined in the March 1949 issue, "and weigh only 1.5 tons."
But what Popular Mechanics had failed to grasp was the significance of an invention made at Bell Labs fully two years earlier by John Bardeen, Walter Brattain and William Shockley: the solid-state transistor. By 1958, Jack Kilby at Texas Instruments would squeeze several tiny transistors onto a single integrated circuit, or IC, and the floodgates of computer development would be opened.
In 1965, a guy named Gordon Moore would even go so far as to assert that the number of transistors science could fit onto an IC would double roughly every two years. And just to prove himself right, he'd later have Intel, the company he co-founded, do precisely that, up to the present day.
So long as circuits get smaller, computers get more powerful. But today, as in the late 1940s, computer scientists are already beginning to see the brick wall at the end of the tunnel for today's technology.
The consensus among chip manufacturers is that the miniaturization of ICs will hit its limit as early as 2020. By that time, reduced to the size of mere molecules, the transistors on a chip will be as small as they can get.
Keeping that pace steady will require a quantum leap in computer engineering, like the transistor in the 1950s and the IC in the 1960s. Scientists at IBM and elsewhere are betting that this next breakthrough will be quantum computing.
While today's fastest processor can trace a direct lineage to the engineering ideas behind the ENIAC, a quantum computer is something altogether different.
At its most basic level, quantum computing relies on the fact that electrons can spin only one of two ways: up or down. By applying the right amount of radio-wave energy, you can flip an electron so it spins the other direction.
These properties make electrons a nice subatomic substitute for the binary bits at the heart of ordinary computers. Instead of the modern CPU's millions of transistors, a useful quantum system might consist of only a few dozen atoms.
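If it helps to see that in code, here's a toy sketch, written in Python with numpy purely for illustration (it bears no resemblance to how IBM actually drives its hardware): a qubit as a pair of amplitudes, and a full-strength pulse as the operation that flips it outright.

```python
# Toy model only: a classical bit holds exactly one of two values, while a
# qubit's state is a pair of complex amplitudes, one for "spin up" and one
# for "spin down."
import numpy as np

classical_bit = 0                            # either 0 or 1, nothing in between

spin_up = np.array([1, 0], dtype=complex)    # electron definitely spinning up
spin_down = np.array([0, 1], dtype=complex)  # electron definitely spinning down

# A pulse of the right strength flips the spin outright; in this sketch that
# is just the quantum version of a NOT operation.
flip = np.array([[0, 1],
                 [1, 0]], dtype=complex)

print(flip @ spin_up)   # -> the "spin down" state: [0.+0.j 1.+0.j]
```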
But it's not just about size. Where this gets interesting is when you apply the wrong amount of energy to a particle: enough to affect it, but not enough to reverse its spin completely.
The particle then enters a state called "superposition," in which it's spinning neither up nor down. Essentially, it's doing both at the same time, via the mysteries of subatomic physics.
A quantum computer is a series of such electrons in superposition, aligned to represent a large set of data. If you keep tickling a quantum computer with radio waves, you can perform operations on its electrons much like the logic gates of a traditional computer.
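Extending the same toy sketch, a half-strength operation leaves the qubit in superposition. The matrix below is the textbook Hadamard gate; assuming a lab's radio pulse implements exactly this gate is a simplification made only for illustration.

```python
# Toy model, continued: a "half-strength" operation puts a qubit into a
# superposition of up and down rather than flipping it all the way over.
import numpy as np

spin_up = np.array([1, 0], dtype=complex)

hadamard = np.array([[1,  1],
                     [1, -1]], dtype=complex) / np.sqrt(2)

superposed = hadamard @ spin_up
print(superposed)              # equal amplitudes for up and down: both at once
print(np.abs(superposed)**2)   # if observed, each outcome shows up half the time
```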
But how can these operations be meaningful, if the bits are both on and off? How can you ever extract a result? The answer lies in the fundamental rules of quantum mechanics.
The first rule says that when you observe a particle in superposition, it will "make up its mind," appearing to spin either up or down: on or off.
The second rule states that any time you gain information about even the smallest part of a quantum system, the rest of the system instantly adjusts itself to be consistent with that information.
So, imagine a series of electrons in superposition that represents a vast multitude of computational possibilities. By applying radio waves to the series in very specific patterns, you coax the "bits" into a state that represents the answer you're looking for.
Then, observe just one of them, and because of the laws of quantum mechanics, every particle in the system instantaneously switches to either an on or off state — leaving your answer.
The miracle of quantum computing is that you've performed your calculations on every possible variation of the data set at the same time, without ever returning any solution but the correct one.
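In the same toy picture, reading out a small register looks something like this: on paper you track 2^n amplitudes, but the moment you observe the system you get back a single, definite bit pattern. Again, this is a numpy sketch of the arithmetic, not anything resembling real hardware.

```python
# Toy model: a register of n qubits in uniform superposition is described by
# 2**n amplitudes, but one measurement yields just one of the 2**n bit
# patterns, chosen with probability equal to the squared amplitude.
import numpy as np

rng = np.random.default_rng()

n = 3
amplitudes = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

probabilities = np.abs(amplitudes)**2
outcome = rng.choice(2**n, p=probabilities)   # the act of observing picks one state
print(format(int(outcome), f"0{n}b"))         # and the register now reads only this
```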
The catch is that you can't check your result halfway through the process. You can only observe it at the end, or else you destroy the superposition and your quantum operations cease.
It's one of those insane things about quantum mechanics. You're talking about a bunch of electrons that are doing something that they shouldn't be doing, according to established physical law.
As soon as you come around the corner and take a peek at what they're doing, they revert to a quantifiable state that is in keeping with established physical law — almost as though they don't want to get caught.
So the one thing you never want to do with a quantum processor while it's processing is ask it how it's doing. As soon as you do that, you lose the ability to operate on 2^x data states at once (where x is the number of qubits in your computer), and you revert to just one data state.
If that single data state isn't the answer to the problem you were looking for, then you've blown it. You have to be certain you will have the correct answer before you check to see what it is. Like most miracles, quantum computers require faith.
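For a sense of scale, that 2^x figure works out like this for a few register sizes:

```python
# Each added qubit doubles the number of data states held in superposition.
for qubits in (5, 10, 20, 40):
    print(f"{qubits} qubits -> {2**qubits:,} simultaneous data states")
```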
This problem necessitates a whole new approach to software design, as well as computer engineering. To get an idea how it's done, it might help to remember Rubik's Cube.
Superficially, solving the Cube is a matter of aligning squares of different colors, until the colors match on all six faces of the Cube.
The key, then, would appear to be in watching the colors as they move, and keeping squares of the same color grouped, until you can get all the like-colored squares adjacent. As millions of Cube puzzlers can testify, this task is difficult, and can take many hundreds of moves — if you're successful at all.
Pick up any book on beating the Cube, though, and you'll see that the real trick has very little to do with matching colors. Efficiently solving the puzzle involves making specific types of moves in the correct sequence.
Armed with the right formula, you need only observe the colors on the Cube a few times, instead of watching them with every move. In fact, some Cube masters claim they're able to solve a random Cube with their eyes closed.
Of course, this is exactly the approach that's necessary for quantum algorithms. Every time you observe the state of a quantum system, you effectively destroy it.
What's needed, then, is to figure out the correct series of operations that will yield the result you want, without looking at the contents of the system along the way. Once you've gotten the method down, by the end of the process your result should appear almost magically — like solving Rubik's Cube.
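To make the analogy a bit more concrete, here is a plain numpy simulation of Grover's search algorithm (more on its author below): a fixed, pre-computed sequence of operations, with a look at the register only at the very end. It works through the arithmetic on an ordinary computer and is offered purely as an illustration, not as code for any real quantum machine.

```python
import numpy as np

def grover_search(n_qubits: int, marked: int) -> int:
    """Simulate Grover's algorithm and return the measured bit pattern."""
    size = 2**n_qubits
    amplitudes = np.full(size, 1 / np.sqrt(size))   # start in uniform superposition

    # The "solution book": how many turns to make, decided before we start looking.
    iterations = int(np.floor(np.pi / 4 * np.sqrt(size)))

    for _ in range(iterations):
        amplitudes[marked] *= -1                          # oracle: tag the answer
        amplitudes = 2 * amplitudes.mean() - amplitudes   # diffusion: amplify the tag

    probabilities = amplitudes**2
    return int(np.random.default_rng().choice(size, p=probabilities))  # look only now

print(grover_search(n_qubits=6, marked=42))   # prints 42 with high probability
```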
All this is, of course, a massive simplification. A real understanding of how quantum computers work would require a degree in particle physics. What's interesting to note, perhaps, is that an understanding is all anyone can get, because useful quantum computers don't yet exist.
IBM's five-qubit demonstration is little more than a research tool, incapable of any meaningful problem solving. In effect, Lov Grover, Peter Shor and the other scientists who are busy designing algorithms for quantum computers are writing the solution book for a Rubik's Cube that hasn't been built.
Don't bet you won't see real quantum computers in your lifetime, however. 2020 is fast approaching. And besides, the applications for massive simultaneous calculations are too enticing — ranging from cracking encryption to modeling global weather patterns.
Still, most scientists agree a useful model is likely many decades away. As physicist and quantum computing researcher Raymond Laflamme puts it: "On my pessimistic days, I think quantum computing is crazy."