- What is computation, in the most general sense?
- Are there limits to computation, that is, to what can be computed?
- Are there physical resource bounds to computation and, if so, what are they?
What is computation?
The computers we use today are digital, or discrete. If you look at the screen of your smartphone or tablet under a microscope, you will see that it is made up of many discrete dots of light. This discreteness stands in contrast to the continuous nature of physical reality, as far as we can perceive it. Computation, in the most general sense, is not necessarily discrete. Analog computers predate digital computers and were used for special-purpose problems, most notably differential equations such as those governing the motion of projectiles. The analog-versus-digital distinction is a matter of implementation: a correctly functioning analog computer and a correctly functioning digital computer should give the same result when asked to compute the same problem.
One of the factors that led to the eventual dominance of digital computers is their general-purpose nature. Unlike an analog computer, a digital computer can, in principle, be programmed to compute just about anything. This is not an arbitrary assertion, either. Alan Turing proved[1] that there exists an abstract machine, today called a Universal Turing Machine (UTM), that can simulate the action of any other Turing machine. Every mathematical proposition can, in principle, be converted into a Turing machine; that is, the proposition can be reformulated as an assertion about the behavior of a Turing machine (whether it will halt or not). Since a UTM can simulate any Turing machine, and since every mathematical proposition can be so reformulated, it follows that a UTM can compute any mathematical function[2]. Modern general-purpose math software like Matlab or Mathematica is a logical consequence of Turing's discovery. The broad array of things a modern computer can simulate - everything from chaotic systems to virtual worlds - shows just how wide the implications of generality really are. By contrast, this generality principle is not true of any known[3] analog computer.
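The key insight is that a Turing machine is just data - a transition table - and a single generic loop can run any such table. A minimal sketch (the function and variable names here are mine, invented for illustration):

```python
# A tiny Turing machine interpreter. The machine itself is only a
# transition table; one generic loop can execute ANY table, which is
# the essence of Turing's universality result.

def run_tm(table, tape, state="start", steps=1000):
    """table maps (state, symbol) -> (write, move, new_state).
    Returns the final tape when the machine halts, else None."""
    cells = dict(enumerate(tape))  # sparse tape; blank cells are "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return None  # did not halt within the step budget

# Example table: flip every bit, then halt at the first blank cell.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flipper, "1011"))  # -> 0100_
```

Swapping in a different table changes what is computed without touching the interpreter - the same separation of "machine" from "program" that makes a digital computer general-purpose.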
But let's go back to the beginning, back before computers. What is computation? The first people to understand how to multiply were able to accomplish a given task - measuring area, totaling the product of multiple shipments of a fixed size, and so on - using a short-hand: the multiplication method (whichever method they happened to employ). For example, say there is an area 200 feet long and 50 feet wide. What is the total area within these bounds? A naive method would require laying down, say, 50 rows of 200 stones - 10,000 stones in all - and counting the result. This is time-consuming, and it becomes ever more time-consuming as the dimensions increase. But a multiplication method allows the same result to be calculated quickly, using modest computational aids (abacus, clay tablet, papyrus, slide rule, or whatever).
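The savings are easy to make concrete. A small sketch (function names are mine) contrasting the stone-counting method with the short-hand:

```python
# Naive reckoning: lay one "stone" per square foot and count them all.
def area_by_counting(length, width):
    stones = 0
    for _ in range(width):        # 50 rows...
        for _ in range(length):   # ...of 200 stones each
            stones += 1
    return stones

# The short-hand: a single multiplication.
def area_by_multiplying(length, width):
    return length * width

print(area_by_counting(200, 50))     # 10000, after 10,000 counting steps
print(area_by_multiplying(200, 50))  # 10000, in one step
```

Both procedures compute the same function; the difference between them is purely one of cost, which is exactly the point.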
The origin of computation - or calculation, at least - lies in the need to save time and resources in the reckoning of figures. But the advent of modern mathematical ideas - algebraic geometry, the calculus, group theory, and so on - brought with it a new view of numbers not as entities existing in themselves, but as constructs of more basic concepts - sets, operators, relations, functions, and so on. The domain of modern mathematics focused primarily on continuous entities - the real number line, polynomial functions, and so on.
We know today that the minimum requirements for general-purpose computation are remarkably modest. You can build a computer out of water pipes, reservoirs, and valves. You can build a computer out of gears, levers, and pulleys. You can implement a computer on many different substrates. The real challenge is not achieving generality but, rather, achieving efficiency.
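How modest? A single gate type, NAND, is functionally complete: every Boolean function can be composed from it. So any substrate - pipes, gears, transistors - that can implement one reliable NAND-like element can, in principle, compute anything. A sketch:

```python
# NAND is functionally complete: every other Boolean gate can be
# built by composing NANDs, which is why so many different physical
# substrates suffice for general-purpose computation.

def nand(a, b):
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# Sanity check: the XOR truth table, built purely from NAND.
for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```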
There are several important computational paradigms, but arguably the two most important are stateful computation and stateless computation. In digital circuit design, these correspond to sequential (clocked) logic and combinational logic. Stateful computation refers to computation that requires some memory of past state in order to proceed. Stateless computation, by contrast, is a pure relation from inputs to outputs. This latter type of computation is sometimes called "functional," as it closely resembles mathematical functions that do not rely on any internal state. Both forms of computation are, ultimately, equivalent, but the need to rely on state has important physical ramifications.
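The distinction can be shown directly (a toy example; the names are mine):

```python
# Stateless: a pure function. The output depends only on the inputs,
# like a combinational circuit.
def add(a, b):
    return a + b

# Stateful: the output depends on remembered history, like a
# sequential circuit built around a register.
class Accumulator:
    def __init__(self):
        self.total = 0          # the "state"
    def add(self, x):
        self.total += x
        return self.total

acc = Accumulator()
print(add(3, 4), add(3, 4))    # 7 7 -- same inputs, same output, always
print(acc.add(3), acc.add(3))  # 3 6 -- same input, different outputs
```

The pure function can be called in any order, in parallel, or memoized; the accumulator cannot, because its answer depends on what happened before - a first hint of why state carries physical costs.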
What are the limits of computation?
There are actually two components to this question. (1) Are there things that cannot be computed? Are there problems so difficult that no finite amount of computation can solve them? (2) Of the things that can be computed, what is the lower bound on the amount of computation required to solve a problem in a given problem class?
The answer to (1) is yes, there are problems that cannot be computed in finite time or space. The field of mathematics that studies these problems is called computability theory. The field of mathematics that studies (2) is called computational complexity theory.
Computability theory studies problems that cannot be computed. Defining exactly what this means is a bit tricky, since even to state a problem is to "compute" it in a very superficial way. There are two key ideas behind the concept of computability: (a) any computation which completes (and returns a result) takes up a finite amount of time, however large, and (b) any computation that completes in finite time must also be described by a program of finite length. Any problem whose solution cannot be computed in finite time, or which cannot be described with a program of finite length, is not computable. Such a problem is called uncomputable or, roughly synonymously, undecidable. Finally, there exist many interesting problems in the class of uncomputable problems. Arguably, all of the most interesting problems are in the class of uncomputable problems. The study of the limits of computation, therefore, is not merely pedantic.
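The classic uncomputable problem is the halting problem, and Turing's diagonal argument against it can be sketched in a few lines of code (a sketch of the idea, not a formal proof; all names are mine):

```python
# Diagonal argument: assume a halting oracle exists, then build a
# program that does the opposite of whatever the oracle predicts
# about that very program.

def diagonal(halts):
    """Given a claimed oracle halts(f) -> bool, construct a program g
    that defeats the oracle on itself."""
    def g():
        if halts(g):
            while True:   # oracle said "g halts", so loop forever
                pass
        return "halted"   # oracle said "g loops", so halt at once
    return g

# Any concrete oracle is wrong about its own diagonal program.
claims_everything_halts = lambda f: True
claims_nothing_halts = lambda f: False

g1 = diagonal(claims_everything_halts)  # oracle says g1 halts; g1 loops
g2 = diagonal(claims_nothing_halts)     # oracle says g2 loops; g2 halts
print(g2())  # -> halted
```

No matter how clever the candidate oracle, the construction traps it, which is why no finite program can decide halting in general.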
Computational complexity theory is much less black and white than computability theory. If a problem is proved uncomputable, it's uncomputable and there's really nothing more to be said about it. But the computational complexity of a given problem crucially depends on the assumptions about the available computational system and the scale of the problem that is of interest. For example, certain problems of interest to cryptographers that are very difficult on a classical computer would be quadratically easier to compute on a quantum computer. Thus, the computational complexity of a problem does not inhere solely within the problem itself, but within the combined set of the problem and the system on which the problem is to be computed.
It is difficult to precisely define what separates a feasibly computable problem from an infeasible one. For example, some problems have good asymptotic bounds but an infeasibly large constant term: the fixed overhead dominates the cost at small scales, and the favorable asymptotic behavior only pays off for much larger inputs.
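A toy illustration with invented cost models: an algorithm costing 1,000,000 + n steps is asymptotically linear, yet loses to a quadratic algorithm costing n^2 steps until n passes about 1,000.

```python
# Two hypothetical cost models: a "fast" linear algorithm with a huge
# constant set-up cost, and a "slow" quadratic one with none.
def cost_linear(n):    return 1_000_000 + n
def cost_quadratic(n): return n * n

for n in (10, 100, 1_000, 10_000):
    winner = "linear" if cost_linear(n) < cost_quadratic(n) else "quadratic"
    print(f"n = {n:>6}: {winner} is cheaper")
```

Asymptotic analysis alone would always recommend the linear algorithm; at the scales many real problems live at, that recommendation is wrong.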
What are the physical resource bounds on computation?
There are physical resource bounds on computation. It is difficult to give meaningful experimental answers to this question, but theoreticians have made several attempts. From Wikipedia:
- The Bekenstein bound limits the amount of information that can be stored within a spherical volume to the entropy of a black hole with the same surface area.
- Thermodynamics limits the data storage of a system based on its energy, number of particles, and particle modes. In practice, it is a stronger bound than the Bekenstein bound.
- Landauer's principle defines a lower theoretical limit for energy consumption: kT ln 2 joules consumed per irreversible state change, where k is the Boltzmann constant and T is the operating temperature of the computer. Reversible computing is not subject to this lower bound. T cannot, even in theory, be made lower than 3 kelvins, the approximate temperature of the cosmic microwave background radiation, without spending more energy on cooling than is saved in computation.
- Bremermann's limit is the maximum computational speed of a self-contained system in the material universe, and is based on mass-energy versus quantum uncertainty constraints.
- The Margolus–Levitin theorem sets a bound on the maximum computational speed per unit of energy: 6 × 10^33 operations per second per joule. This bound, however, can be avoided if there is access to quantum memory. Computational algorithms can then be designed that require an arbitrarily small amount of energy/time per elementary computation step.
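Landauer's bound, at least, is easy to put a number on. At room temperature (taking T ≈ 300 K for illustration), kT ln 2 comes to roughly 3 × 10^-21 joules per irreversibly erased bit:

```python
import math

k = 1.380649e-23   # Boltzmann constant in J/K (exact under SI 2019)
T = 300            # assumed operating temperature in kelvin

landauer_joules_per_bit = k * T * math.log(2)
print(f"{landauer_joules_per_bit:.2e} J per irreversible bit operation")
# -> 2.87e-21 J, far below what present-day hardware dissipates
```

Cooling the computer lowers the bound linearly in T, which is why the cosmic-microwave-background floor of about 3 kelvin mentioned above matters: below it, refrigeration costs more energy than computation saves.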
---
1. Alan Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Ser. 2, Vol. 42 (1937)
2. I am using the term "mathematical function" loosely, here. There are a lot of formal details surrounding the problem of exactly what we mean by "mathematical function" but after all of these details have been taken into account, the assertion stands.
3. Or, at least, of any practical analog computer. There is a class of analog computer models referred to as General Purpose Analog Computers, or GPACs, but no one has built a GPAC. In future posts, we will begin to discuss quantum computation, which is actually a sub-category of analog computation and does have computational generality.