Wednesday, August 2, 2017

Notes on a Cosmology - Part 10, Quantum Computation

In Part 6, we introduced the theory of computation which we will call classical computation. In this post, we are going to look at the theory of quantum computation.

What a quantum computer is

It can be difficult to visualize exactly what we mean when we talk about "a quantum computer". It is probably closer to the truth to think of "a quantum experiment" instead of "a quantum computer." Think lasers on a laboratory bench. Here are the building blocks that any quantum computer must have[6]:
  1. We need one (or preferably more) physical qubits that can be prepared with quantum states (e.g. nitrogen-vacancy qubits in a diamond substrate)
  2. We need to keep decoherence low, that is, we must isolate the quantum system from external noise. Any interaction with the external environment entangles the qubit with its surroundings, causing it to "decohere"
  3. We need to set up universal quantum gate operations. These gate operations can be sequenced by hand or by computer, and they are part of the state preparation.
  4. We must initialize (prepare) the qubits to an initial quantum state
  5. We must perform single-qubit measurements. Ideally, the result follows the dynamics of the quantum circuit we encoded in step 3 (and prepared on the physical qubits in step 4).
We must iterate the above process (at least steps 4 and 5) to get a classical probability distribution on the measured state. The result is inferred from this distribution, not from any single measurement of the quantum computation. This is one of the reasons to think of QC as being like a noisy communication channel. This process of iteration and inference of a result from the distribution counts as a single "step" of the quantum computer.

We may also need to iterate on a particular problem instance. That is, a particular problem instance may require multiple steps of the quantum computer in order to reach a solution. Specifically, we mean that the result of one step is used to calculate the preparation of the initial state of the next step. For certain problems, a quantum computer will require quadratically or even exponentially fewer steps than the equivalent algorithm run on a classical computer.

Measurement is always classical, but measurement is also a quantum operator. Specifically, it is a quantum operator whose result is guaranteed to be classical (real). Results of computation are inferred from classical measurements. The wavefunction gives a probability amplitude, which is a complex value (thus, not a probability, per se). The squared modulus of the amplitude gives the classical probability governing the observable of interest (momentum, position, spin, etc.). Quantum operators compose under unitary dynamics - a unitary operator is a norm-preserving linear map on the complex state vector, so the total probability stays equal to 1. These operators can be visualized as operations on state vectors on the Bloch sphere:

Fig. 1 - The Bloch sphere
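The Born rule and unitarity described above can be checked in a few lines of plain Python. This is only a sketch, using the Pauli-Z gate as the example unitary: the squared modulus of each complex amplitude is a classical probability, and applying the gate leaves the total probability at 1.

```python
import math

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1.  The squared modulus of each amplitude is
# the classical probability of measuring 0 or 1 (the Born rule).
alpha, beta = 1 / math.sqrt(2), 1j / math.sqrt(2)

p0 = abs(alpha) ** 2  # probability of measuring |0>
p1 = abs(beta) ** 2   # probability of measuring |1>

# A unitary gate (here, the Pauli-Z gate) acts by matrix multiplication
# and always preserves the total probability (the norm of the state).
Z = [[1, 0], [0, -1]]
alpha2 = Z[0][0] * alpha + Z[0][1] * beta
beta2 = Z[1][0] * alpha + Z[1][1] * beta
norm_after = abs(alpha2) ** 2 + abs(beta2) ** 2
```

Note that the amplitude `1j / sqrt(2)` is genuinely complex, yet the probabilities read off from it are ordinary real numbers.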

What makes the qubit model so useful for quantum computation is that applying a sequence of quantum gates is equivalent to performing a sequence of rotations of the quantum state around the surface of the Bloch sphere. In other words, since we are guaranteed never to leave the sphere, we are free to combine quantum operations in any way we please. A single-qubit quantum operator ("quantum logic gate") corresponds to a rotation of the Bloch sphere, and applying the operator to a quantum state (or composing it with another operator) is equivalent to performing that rotation. When trying to envision a quantum algorithm, it's important to remember that "quantum logic gates" do not physically exist apart from the quantum state. The only actual hardware in a physical quantum computer is the qubits themselves! This fact should not be invested with any mystical connotation. The preparation of a quantum state is no different, in principle, than the encoding of a software program - which is just a pattern of electromagnetic states - onto a computer disk in preparation for a classical computation. The difference is that the hardware we are running on is the quantum bits themselves. We are literally computing on the laws of quantum physics.
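The claim that gates compose freely as rotations can be demonstrated with a classic identity: a Hadamard, then a Z rotation, then another Hadamard is exactly the same operation as a single X (bit-flip) gate. A minimal sketch with 2x2 complex matrices:

```python
import math

def matmul(A, B):
    """Product of two 2x2 (possibly complex) matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]   # Hadamard gate
Z = [[1, 0], [0, -1]]   # Pauli-Z (rotation about the z-axis)
X = [[0, 1], [1, 0]]    # Pauli-X (bit flip)

# Composing the three rotations H, Z, H yields exactly the X rotation.
HZH = matmul(H, matmul(Z, H))
```

Because every gate is just another rotation of the sphere, identities like this one let you rewrite and optimize quantum circuits the way Boolean identities let you rewrite classical ones.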

Classical versus quantum computation
It always bothers me that according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space and no matter how tiny a region of time … I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed and the laws will turn out to be simple, like the chequer board with all its apparent complexities. But this speculation is of the same nature as those other people make—“I like it”,“I don't like it”—and it is not good to be too prejudiced about these things. - Richard Feynman, The Character of Physical Law (1965)
Quantum computing is widely misunderstood to be a fundamentally different form of computing from anything else. This is incorrect. Since we live in a quantum world, a classical computer can be thought of as a really inefficient quantum computer[1]. To see why, let us consider a simple example of reckoning-based computation. Let us say we want to find the square root of 38,349. If you have ever worked through the square-root procedure, you know that it can be long and tedious. Today, we can just grab a digital calculator, but it wasn't so easy even just 70 years ago. But let us say that you just happen to have 38,349 pennies on hand. One method would be to pour out all the pennies on a flat surface (large table or floor) and spread them out evenly until they make a square (roughly). You can check the diagonals with a tape-measure -- when they are equal and your sides are straight, the pennies are square. As long as every penny is lying flat on the floor in a quadrille pattern (not hexagonally packed), you can find the square root simply by counting off the number of pennies on one side of the square. That's the definition of what a square-root is. Of course, unless you are very careful, your whole-number result will not be exactly correct and you would have to take additional steps to get the fractional part of the square root (the part after the decimal point). But the point remains, you have just performed an approximate square-root operation -- in effectively a single step -- that would otherwise have been quite tedious on pen and paper.
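The penny-square trick can be restated in code: lay out n pennies in the largest k-by-k square that fits, then count one side. Python's `math.isqrt` performs exactly this whole-number square-root operation, leftover "pennies" and all:

```python
import math

n = 38349
side = math.isqrt(n)         # pennies along one side of the largest square
left_over = n - side * side  # pennies that didn't fit into the square

# side*side <= n < (side+1)*(side+1): the defining property of the
# integer square root, just like the penny square on the floor.
```

Running this gives a side of 195 with 324 pennies left over, matching the "approximately 195.8" you would get from a calculator.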

This kind of computing is called analog computing and different versions of it were used by the Navy for calculating ballistics trajectories, navigation and other tasks. It can be argued that the most widely-used analog computer was the trusty slide-rule. Unlike a digital computer, the slide-rule directly calculates a result in as few as two steps. If you know what you're doing, you can even use it to take the square-root of 38,349 without too much difficulty.

When you power up a quantum computer and perform some calculations with it, the results can only be read out by looking at the numbers on the computer's output screen (or, reading them digitally and then displaying them). The act of measuring (with your eyes), according to quantum theory itself, converts any quantum phenomena that may have occurred in the course of the quantum computation into some classical (real) value. In short, every quantum computer is also an analog computer. It might be very efficient for certain types of specialized tasks (for example, simulating quantum physics), but it is nothing more than a very expensive analog computer.

The theory of quantum computation does not change any of the results we have developed on the basis of classical computation that derive from computability theory - there is nothing that a quantum computer can compute that a classical computer cannot. Conversely, all the problems that are unsolvable for a classical computer are also unsolvable for a quantum computer. The distinction between quantum computing and classical computing has to do with computational efficiency on certain classes of problems, especially those problems that are related to computing the evolution of physical laws. It is widely believed among experts in this area that classical computers will never be able to simulate the evolution of physical laws as efficiently as quantum computers.

Quantum Supremacy

Quantum computation was predicted in theory and later confirmed in the laboratory. Today, there is an extensive body of theory for quantum computation, including a growing library of quantum algorithms for solving useful problems that are hard for classical computers (for example, search over an unsorted collection of elements).
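The unsorted-search example here is Grover's algorithm. For two qubits (four items), a single Grover iteration finds the marked item with certainty. The following is a sketch of the underlying linear algebra only (a statevector on paper, not hardware): the oracle flips the sign of the marked amplitude, and the diffusion step reflects every amplitude about the mean.

```python
import math

n_items = 4   # 2 qubits -> 4 basis states
marked = 2    # index of the item we are searching for

# Start in the uniform superposition: equal amplitude on every item.
state = [1 / math.sqrt(n_items)] * n_items

# One Grover iteration:
# 1. Oracle: flip the sign of the marked item's amplitude.
state[marked] = -state[marked]
# 2. Diffusion: reflect every amplitude about the mean amplitude.
mean = sum(state) / n_items
state = [2 * mean - a for a in state]

# Measurement probabilities via the Born rule.
probs = [abs(a) ** 2 for a in state]
```

After one iteration all of the probability sits on the marked item, whereas a classical search over four unsorted items needs up to four lookups. For n items, Grover needs on the order of sqrt(n) iterations - the quadratic speedup mentioned earlier.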

To formalize the advantages of quantum computation over classical computation, researchers have developed a concept called quantum supremacy (QS). The exact definition is rather technical but the basic idea is that QS is achieved when a quantum computer solves an instance of a problem that, when extended to more and more qubits, rapidly becomes intractable for any classical computer.

There is no known physical law that prevents us from achieving QS, and the theory of quantum computing suggests that an ideal quantum computer would be able to achieve absolute quantum supremacy with a few hundred qubits. Note that it is not possible to actually build an ideal QC because this entails perfect noise-isolation (at a temperature very close to absolute zero). Despite the fact that quantum computers leverage quantum laws to achieve parallelism, any given quantum computer can be simulated by a sufficiently large parallel computer. This means that the race between quantum and classical computing for large-scale computation (called High-Performance Computing or HPC) is really an economics, (physical) power and materials-sciences race. Quantum theory allows, in principle, a quantum computer with the power of billions of state-of-the-art supercomputers (or more). The real question is not whether it's possible but whether it will cost billions of times as much to construct such a real, functioning quantum computer at that scale.

A quantum computer is actually a type of analog computer and both analog computers and quantum computers are categorized as approximate computers, as opposed to digital computers which are exact computers. You can perform exact computation with an approximate computer by repeating the approximate computation many times and finding the center of the statistical distribution on the output of the approximate computer. The more times you run it, the more exact you can be. The mathematics describing how to do this is very well-developed. All forms of approximate computing are enormously more power-efficient than exact computing because exact computing forces you to expend a lot of power excluding noise from your system so that you never get a wrong answer.[2]
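The repeat-and-average procedure is just the law of large numbers. A toy sketch (the "approximate computer" here is a hypothetical noisy device whose single run is a biased coin flip encoding a hidden value p): repeating the run and averaging recovers p, with the error shrinking roughly as 1/sqrt(runs).

```python
import random

random.seed(0)   # fixed seed so the sketch is reproducible
p_true = 0.7     # the hidden value the noisy machine encodes

def one_run():
    # A single "step" of the approximate computer: a noisy 0/1 readout
    # that comes up 1 with probability p_true.
    return 1 if random.random() < p_true else 0

runs = 100_000
estimate = sum(one_run() for _ in range(runs)) / runs
```

With 100,000 runs the estimate lands within about a hundredth of the true value; quadrupling the number of runs halves the expected error, which is exactly the trade between repetition and exactness described above.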

Quantum physics might be strange but the mathematics that describes the internal quantum weirdness of quantum computers is actually pretty straightforward, boring even. Quantum computation, which relies on some quantum effects, is not nearly as spooky or mysterious as quantum mechanics itself. If you banish all thought of "what is really happening inside a quantum computer?" and, instead, just focus on characterizing its input-output responses (the same as any other physical system), you can build a pretty straightforward mathematical framework that allows you to perform any quantum computation you like with nothing more mysterious than the complex numbers and matrix multiplication. In fact, it is sufficiently straightforward that ordinary computers have no trouble at all simulating small quantum computers. So, even though the quantum world is strange, and even though quantum computers are capable, in principle, of operating at scales that are simply not attainable for classical computers, the fact remains that quantum computing is not magic.

Quantum Computing Misconceptions

Most modern encryption systems are not information-theoretically secure, meaning, it is possible, in principle, to crack most codes with a sufficiently large computer[3]. Of course, for state-of-the-art codes like AES, "sufficiently large" is a euphemism for a computer far larger than all computing power (including cryptocurrency mining) on the planet, combined. Part of the excitement about the possibility of at-scale quantum computation stems from the fact that moderate-scale quantum computers would be able to crack the toughest codes that have been developed over the last few decades. Contrary to the popular press, quantum computing does not spell doom for encryption, it just means we will need to upgrade our encryption algorithms to quantum-resistant encryption algorithms. But the fact remains that it is remarkable that the laws of physics permit us to build a computer that leverages quantum physics -- and which could probably fit in a warehouse -- with enough computing power to outstrip all the existing computers, as we know them, in the world today.
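A quick back-of-the-envelope calculation shows why "sufficiently large" is a euphemism. Brute-forcing AES-128 means trying up to 2^128 keys; even granting a hypothetical classical rig that tests a billion billion keys per second (far beyond any hardware that exists), the search takes trillions of years:

```python
# Brute-force time for a 128-bit keyspace at an assumed (generous,
# hypothetical) rate of 10**18 key trials per second.
keyspace = 2 ** 128
keys_per_second = 10 ** 18          # assumption, not real hardware
seconds_per_year = 3600 * 24 * 365

years = keyspace / (keys_per_second * seconds_per_year)
```

The result is on the order of 10^13 years, roughly a thousand times the age of the universe. By contrast, the quantum attacks on public-key schemes like RSA do not brute-force the keyspace at all; they exploit structure, which is why "moderate-scale" quantum computers are enough to threaten them.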

Despite their tremendous potential advantage over ordinary computers, there are some terrible misconceptions in the popular mind about the difference between ordinary computers and quantum computers. Let's look at two of these misconceptions and clear them up.

The first misconception can be expressed in the following parallelism:

quantum computation : classical computation 
:: 
quantum mechanics : classical mechanics

That is, "Quantum computation is to classical computation as quantum mechanics is to classical mechanics."

As we all know, classical mechanics was actually refuted by the advent of quantum mechanics, at least, at small scales. There can be no doubt that we live in a fundamentally quantum world and that classical mechanics is an approximation of how large physical systems behave, that is, physical systems consisting of very large numbers of elementary particles. But like all approximations, classical mechanics breaks down at sufficiently high resolution. When you make very exact measurements (or measure macroscopic effects that amplify very small-scale effects), the classical theories of physics begin to break down and give completely wrong answers.

The relationship between quantum computation and classical computation is nothing like this. Classical computation is not an approximate theory of computation at macroscopic scales that was later revised by quantum theory for computation at very small scales. Rather, classical computation is an abstract theory that is correct for any classical observer, no matter what kind of universe he or she happens to inhabit (classical, quantum or something stranger). Quantum computation does not refute or even revise classical computation, it extends it. This is a hugely important distinction because the dominant narrative in the public mind, right now, is this idea that quantum computers and classical computers are locked in some kind of existential struggle, with the old, outdated classical computers that rely on busted classical physics soon to be rendered obsolete by the mighty quantum computer with its quantum weirdness.

The second misconception, which is connected to the first misconception, is that quantum computers can solve problems that classical computers cannot solve, even in principle. These two misconceptions obviously work hand-in-hand. The correct understanding is this: quantum computers efficiently solve problems that we believe classical computers cannot efficiently solve. But, given enough time and enough memory, any classical computer can solve any problem that a quantum computer can solve. This is in strong contrast to the relation between classical mechanics and quantum mechanics where the two theories rapidly diverge and only quantum mechanics gives correct answers that agree with laboratory experiment, while classical mechanics can only give wrong answers.

Quantum Simulation

I'm studying the theory of time-frequency analysis in my spare time and here's a quote from Foundations of Time-Frequency Analysis by Karlheinz Groechenig,
Quantum mechanics and signal analysis share a common mathematical framework. The analogy between "time-frequency" and "position-momentum" leads to many similarities between signal analysis and quantum mechanics despite the rather counterintuitive aspects of quantum mechanics. It is therefore often instructive and enriching to study a problem in time-frequency analysis from the point of view of quantum mechanics. Throughout this book we will make an occasional reference to quantum mechanics and use it as a second source of motivation and interpretation for the mathematical theory. Many mathematical structures of time-frequency analysis go back to the origins of quantum mechanics and to the quest for its precise mathematical foundation. For example, the Wigner distribution (Chapter 4.3) was introduced to analyze the joint phase space distribution of position and momentum [253] - joint time-frequency distribution in signal analysis -- and the Weyl calculus of pseudodifferential operators (Chapter 14.3) emerges from the quantization problem [250]. The series expansions that are nowadays called Gabor expansions (Chapters 5-7, 12, 13) were suggested by J. von Neumann in an attempt to formalize the quantum mechanical measurement process [206]. (Section 2.4 Quantum Mechanics and the Uncertainty Principle)
It is hard to over-emphasize the significance of that first sentence -- "quantum mechanics and signal analysis share a common mathematical framework." The same mathematics that describes the behavior of signals in the time-frequency domain can also be used to describe the behavior of quantum mechanical systems. When thinking about quantum systems in respect to their mechanical effects, this might seem to be a source of paradox. Signals, after all, are ethereal things, like words. Quantum particles, however, are as real and mechanical as a fist. But when thinking about quantum systems in respect to their computational effects, there should be no mystery. We are free to think of a quantum computer as a very ordinary piece of equipment, no more mysterious than a cellphone modem.

So, this brings me to the topic of quantum simulation. Since a classical computer is capable, in principle, of solving any problem that a quantum computer can solve (it just might require the lifetime of the universe, many times over, to complete), this means that it is possible to build quantum simulators. A quantum simulator is a classical computer that simulates a (small) quantum computer. Many quantum computer simulation packages and languages already exist (Q#, PyQuil, QISKit, and many more). We already know roughly how far these kinds of simulators can be scaled up on standard computers: not very far. If you are willing to lay down some serious cash to rent a significant amount of compute from Amazon Web Services (AWS), you could simulate maybe 70 qubits (see here for one approach). Even though QC is very powerful, 70 qubits is not enough to solve any significant problems. You need at least several hundred ideal qubits[4] before you can tackle very hard problems like cracking state-of-the-art codes, and so on. In terms of real (non-ideal) qubits, some estimates run into the millions in order to build at-scale quantum computers capable of achieving the dream of unrestricted quantum computing.
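The scaling wall is easy to see from the memory cost alone. A straightforward statevector simulator stores 2^n complex amplitudes (16 bytes each at double precision), so the memory requirement doubles with every added qubit:

```python
# Memory needed by a brute-force statevector simulator: 2**n complex
# amplitudes at 16 bytes (double-precision real + imaginary) each.
bytes_per_amplitude = 16

def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * bytes_per_amplitude

gib_30 = statevector_bytes(30) / 2 ** 30   # 30 qubits in gibibytes
zib_70 = statevector_bytes(70) / 2 ** 70   # 70 qubits in zebibytes
```

Thirty qubits already demand 16 GiB, the limit of a decent desktop; seventy qubits demand 16 ZiB, on the order of all the digital storage on Earth. (Clever circuit-dependent techniques, like the tensor-network approaches used in large AWS-style simulations, trade memory for time, but the exponential wall only moves; it does not go away.)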

But signal theory tells us that we are thinking about the whole problem of simulating quantum computation incorrectly. Of course you can't scale up standard computers to simulate quantum computers. Standard computers are exact (noiseless) computing machines but quantum computers are stochastic (that is, intrinsically noisy) machines! You have to use a completely different approach if you want to efficiently simulate quantum computing at-scale. Specifically, you need to utilize approximate computing methods.

One such approximate computing method has already been demonstrated by researchers at Purdue University -- Researchers Demonstrate First ‘Poor Man’s Qubit’ Probabilistic Computing Hardware. This approach to simulating qubits is built on the common mathematical foundation shared by signal analysis and quantum computation. The practical significance of this research is that previous estimates about how powerful non-quantum computation can be for quantum simulation are wrong, drastically wrong. While your standard CPU or GPU is pathetic at quantum simulation, it doesn't follow that every non-quantum computer that can be built will be pathetic at quantum simulation. This is a field of ongoing research, so the jury is still out, but it may very well be possible to efficiently simulate certain kinds of quantum computation using non-quantum computers[5].

Next: Part 11, Universal Search

---

[1] For the quantum computing nerd, imagine taking your favorite quantum algorithm and inserting measurement gates between every pair of connected quantum gates in your quantum algorithm. Obviously, this destroys all the quantum properties that make quantum computing more powerful than classical computing. More importantly, it shows that a classical computer can be described in terms of quantum computation quite naturally.
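This footnote's claim can be checked with explicit amplitudes. Applying a Hadamard gate twice returns |0> with certainty because the |1> amplitudes interfere and cancel; inserting a measurement between the two gates destroys that interference and leaves a fair coin. A minimal sketch:

```python
import math

s = 1 / math.sqrt(2)

def hadamard(state):
    # Apply the Hadamard gate to a qubit state (a pair of amplitudes).
    a, b = state
    return (s * (a + b), s * (a - b))

# Coherent circuit: H then H, no measurement in between.
# The |1> amplitudes cancel, so we recover |0> with certainty.
coherent = hadamard(hadamard((1, 0)))
p0_coherent = abs(coherent[0]) ** 2

# "Classicalized" circuit: H, measure, H.  The measurement collapses
# the superposition; averaging over both collapse outcomes gives 1/2.
after_h = hadamard((1, 0))
p_collapse0 = abs(after_h[0]) ** 2   # collapses to |0> with prob 1/2
p_collapse1 = abs(after_h[1]) ** 2   # collapses to |1> with prob 1/2
p0_classical = (p_collapse0 * abs(hadamard((1, 0))[0]) ** 2
                + p_collapse1 * abs(hadamard((0, 1))[0]) ** 2)
```

The coherent circuit outputs 0 with probability 1; the measured-in-the-middle circuit outputs 0 with probability 1/2. Interspersing measurements everywhere reduces the machine to exactly this kind of classical, probabilistic behavior.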

[2] This difference is one of the issues I take with the current state of the "quantum vs. classical computing" horse-race. It's not an apples-to-apples comparison, even in principle. We have approximate-and-classical computers (they are often used in signal-processing and similar tasks) and these computers can perform mathematical operations at power profiles that are thousands, even millions of times lower than the amount of power used by the exact, digital computers that power all the big super-computers. We should really be comparing quantum computers to the cost of manufacturing (and operating) massively-parallel approximate computers, not massively-parallel exact computers.

[3] There exist codes that cannot be cracked, even by a quantum computer or even an infinite computer, for that matter.

[4] "Ideal" qubits are in contrast to real qubits which have highly variable quality. This is yet another popular misconception -- that all qubits are created equal (since all bits, to which qubits are usually compared, are more or less created equal). In reality, real qubits are far from ideal and there are physical constraints that make it impossible to fabricate ideal qubits. Even worse, as the number of qubits in a real quantum computer increases, the further its qubits are from being ideal. This doesn't make QC impossible, it just makes it a lot harder to achieve than you are led to believe from the pop-sci press.

[5] See here and here for more information along this line.

[6] Notes on Quantum Computing [PDF], Marcus Kuhn
