But, when a rule is extremely complex, what is in conformity with it passes for irregular. Thus, one can say, in whatever manner God might have created the world, it would always have been regular and in accordance with a certain general order. But God has chosen the most perfect world, that is, the one which is at the same time the simplest in hypotheses and the richest in phenomena, as might be a line in geometry whose construction is easy and whose properties and effects are extremely remarkable and widespread. - Gottfried Leibniz

We are going to shift gears at this point in the series. The previous posts form a solid foundation that will enable us to build a strong framework on top. The goal of this series is to posit a cosmology based on the massive, underground shift in modern mathematics and science that has occurred over the last 100 years, a shift that has not fully percolated through the consciousness of either the public or many academics. These changes have opened the doors to entirely new ways of thinking about how things hang together, in the broadest sense, with all other things.
The intellectual legacy of the West, and in this connection let me recall Pythagoras, Plato, Galileo and James Jeans, states that "Everything is number; God is a mathematician." We are now beginning to believe something slightly different, a refinement of the original Pythagorean credo: "Everything is software; God is a computer programmer." Or perhaps I should say: "All is algorithm!" Just as DNA programs living beings, God programs the universe. - Gregory Chaitin[1]

Chaitin has argued extensively for a parallelism between the idea of computation as a relation between programs and their outputs, and the Universe as a relation between past and future. His argument is based on the nature of science; from the information-theoretic perspective, science is data compression.
For any finite set of scientific or mathematical facts, there is always a theory that is exactly as complicated, exactly the same size in bits, as the facts themselves. (It just directly outputs them “as is,” without doing any computation.) But that doesn’t count, that doesn’t enable us to distinguish between what can be comprehended and what cannot, because there is always a theory that is as complicated as what it explains. A theory, an explanation, is only successful to the extent to which it compresses the number of bits in the facts into a much smaller number of bits of theory. Understanding is compression, comprehension is compression! That’s how we can tell the difference between real theories and ad hoc theories.

Here is the parallel that Chaitin is drawing:
Program --> Computer --> Output
Theory --> Computer --> Mathematical or scientific facts

This parallelism has important consequences and ties back to the parallelism we drew between Shannon's model of a communication system and the information-theoretic model of a scientific experiment (see Part 4).
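The compression criterion can be made concrete. A minimal sketch (not Chaitin's own example), using a general-purpose compressor as a crude stand-in for "size of theory in bits": data generated by a short rule compresses dramatically, while lawless data does not.

```python
import os
import zlib

# Two byte strings of the same length: one generated by a short rule
# (a "theory"), one with no rule at all.
lawful = bytes(i * i % 256 for i in range(4096))  # deterministic law
lawless = os.urandom(4096)                        # incompressible noise

ratios = {}
for name, data in (("lawful", lawful), ("lawless", lawless)):
    ratios[name] = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratios[name]:.0%} of original size")
```

The lawful data shrinks to a small fraction of its size; the random data does not compress at all. In this operational sense, the lawful data is "understood" by the compressor and the random data is not.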
Turing gave us the universal Turing machine, U. U is a universal function - it can stand in for any other computable function. It does not matter if a function's domain is continuous, because U is a symbol-processor and axiomatic mathematics always consists in relations between symbols.
Solomonoff's theory of induction gives us a universal prior probability distribution. In turn, this grounds Bayesian inference, giving us a universal theory of inductive inference - universal induction. This allows us to build autonomous systems that can perform "turn-crank" model-building, no human intelligence or ad hoc neural networks required.
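A toy illustration of the idea (assuming a hand-picked hypothesis class with illustrative description lengths, rather than a true enumeration of all programs): give each candidate rule a prior of 2^-length, where length is its description length in bits, and update on the observations in the usual Bayesian way.

```python
from fractions import Fraction

# Toy "universal" induction: each generating rule gets prior weight
# 2^-length (shorter rule => higher prior), mimicking the Solomonoff
# prior. The description lengths below are illustrative assumptions.
hypotheses = {
    # name: (description length in bits, rule producing bit i)
    "constant-0": (2, lambda i: 0),
    "all-ones":   (2, lambda i: 1),
    "alternate":  (4, lambda i: i % 2),
}

observed = [0, 1, 0, 1, 0, 1]

posterior = {}
for name, (length, rule) in hypotheses.items():
    prior = Fraction(1, 2 ** length)
    # Deterministic rules: likelihood is 1 if every bit matches, else 0.
    likelihood = int(all(rule(i) == b for i, b in enumerate(observed)))
    posterior[name] = prior * likelihood

total = sum(posterior.values())
posterior = {name: p / total for name, p in posterior.items()}
print(posterior)  # all posterior mass lands on "alternate"
```

The surviving hypothesis then predicts the next bit mechanically - "turn-crank" model-building in miniature.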
Hutter's search gives us a universal optimization algorithm - this can be thought of as a universal optimizing compiler. This can be used to accelerate algorithms described in non-optimized form (say, academically or pedagogically) without human intervention.
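The flavor of such search can be suggested with a toy sketch - a shortest-first enumeration in the spirit of Levin search, not Hutter's actual construction: enumerate candidate programs in order of length and return the first one consistent with a specification.

```python
from itertools import product

# Toy universal search: enumerate "programs" (arithmetic expressions
# over a variable x) shortest-first and return the first one matching
# the input/output examples. Real universal search also interleaves
# running times; this sketch only captures the length-ordered search.
TOKENS = ["x", "1", "2", "+", "*", "(", ")"]

def search(examples, max_len=6):
    for length in range(1, max_len + 1):
        for prog in product(TOKENS, repeat=length):
            src = "".join(prog)
            try:
                if all(eval(src, {"x": x}) == y for x, y in examples):
                    return src
            except (SyntaxError, NameError, TypeError, ZeroDivisionError):
                continue  # not a valid program; skip it
    return None

# Find the shortest expression computing f(x) = 2x + 1 on the examples.
print(search([(1, 3), (2, 5), (5, 11)]))
```

The returned expression is, by construction, a shortest program consistent with the data - the search itself needed no human insight into the target function.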
Hutter's AIXI gives us a universal agent. This allows us to build autonomous decision-making systems that can make arbitrarily complex decisions, optimally, without human intervention.
Turbocodes show that theoretically optimal bounds matter, even when they are established in pure mathematics. If real systems operate at an efficiency lower than the provably possible maximum efficiency, then continued search for greater efficiency will continue to pay off. Turbocodes are an example of a real system that operates very near the maximum theoretical efficiency - the Shannon limit.
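The bound in question is Shannon's channel capacity. For the binary symmetric channel it has a simple closed form, C = 1 - H(p); the sketch below computes it. Turbocodes are remarkable precisely because they signal reliably within a small fraction of this limit.

```python
from math import log2

def h2(p: float) -> float:
    """Binary entropy of a coin with bias p, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Shannon capacity (bits per channel use) of a binary symmetric
    channel that flips each transmitted bit with probability p."""
    return 1.0 - h2(p)

# No code, turbo or otherwise, can reliably transmit above this rate.
print(bsc_capacity(0.11))
```

At an 11% flip probability the capacity is about half a bit per channel use; the gap between a real code's rate and this number is exactly the "room left to pay off" mentioned above.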
Meta-materials show that naturally occurring material properties do not circumscribe the boundaries of possible material properties. It may be that natural material properties are just the degenerate case of possible material properties (see Degeneracy Wiki). In other words, we may be looking at the material world upside-down in the general case - we see it as a picture of perfection that we can at best hope to imitate, but never exceed. The Simulation Hypothesis suggests that there are other possible reasons for material properties (we will return to this topic in a future post).
These results about algorithmic information ... are a kind of economic meta-theory for the information economy, which is the asymptotic limit, perhaps, of our current economy in which material resources (petroleum, uranium, gold) are still important, not just technological and scientific know-how...
If we had unlimited energy, all that would matter would be know-how, information, knowing how to build things. And so we finally end up with the idea of a printer for objects, a more plebeian term for a universal constructor. There are already commercial versions of such devices. They are called 3D printers and are used for rapid prototyping and digital fabrication. They are not yet universal constructors, but the trend is clear.

Note that nano-technology need not be nano-scale in order to achieve a universal constructor - it only needs to be energy-feasible, that is, able to harness sufficient energy to replicate itself.
The key to Chaitin's observations is the term information economy. Economization of information has counter-intuitive results. We can see this by looking at algorithmic probability. The Solomonoff prior (Part 9) redistributes probability according to prefix-free coding. To see why this matters, consider the set X_L of all strings x of length |x| = L. Under the universal prior, the most probable strings in this set are those for which K(x) is smallest. Such strings are rare among strings of length L, but the universal prior assigns them higher probability because they are more likely to have been the input to U. Given mild assumptions about the Universe, the universal probability distribution answers the question of why there is order rather than disorder. In addition, it throws into question the whole idea that the Universe has any origin at all. There is no more reason to suppose that a Universe that obeys the universal distribution has an origin than there is to suppose that numbers have an origin.[3]
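Since K(x) is uncomputable, it can only be approximated from above, for example by the compressed length under an ordinary compressor. A rough sketch of the ordering the universal prior induces, with zlib as the stand-in for K:

```python
import os
import zlib

def k_approx(x: bytes) -> int:
    """Crude upper bound on K(x): bit-length of the zlib-compressed
    string. The true K is uncomputable; any compressor bounds it."""
    return 8 * len(zlib.compress(x, 9))

L = 256
candidates = {
    "all-zero": bytes(L),                       # maximally regular
    "periodic": bytes(i % 4 for i in range(L)),  # simple repeating law
    "random":   os.urandom(L),                  # no law at all
}

# Under the universal prior m(x) ~ 2^-K(x), a smaller K(x) means an
# exponentially larger probability. Print candidates in that order.
for name, x in sorted(candidates.items(), key=lambda kv: k_approx(kv[1])):
    print(name, k_approx(x))
```

Although regular strings are vanishingly rare among all strings of length L, the 2^-K(x) weighting makes them overwhelmingly the most probable - which is the formal sense in which the universal distribution favors order over disorder.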
Chaitin has argued that the halting probability Ω - a perfectly well-behaved number that is provably unknowable - shows that mathematics is more like biology than like physics:
Since the bits of Ω in their totality are infinitely complex, we see that pure mathematics contains infinite complexity. Each of the bits of Ω is, so to speak, a complete surprise, an individual atom of mathematical creativity. Pure mathematics is therefore, fundamentally, much more similar to biology, the domain of the complex, than it is to physics, where there is still hope of someday finding a theory of everything, a complete set of equations for the universe that might even fit on a T-shirt...
Establishing this surprising fact has been the most important achievement of algorithmic information theory, even though it is actually a rather weak link between pure mathematics and biology. But I think it’s an actual link, perhaps the first.[4]

Among the sciences, physics has indisputably been the closest companion of pure mathematics. Chaitin argues in [2] (§4, "Information Economy") for a parallelism between computation and DNA, in the most abstract sense[5]:
Software --> Universal Constructor --> Physical system
DNA --> Development/Pregnancy --> Biological system
Seed --> Soil --> Tree

He presents four examples to illuminate the idea:
- Magic, in which knowing someone’s secret name gives you power over them
- Astrophysicist Fred Hoyle’s vision of a future society in his science-fiction novel Ossian’s Ride
- Mathematician John von Neumann’s cellular automata world with its self-reproducing automata and a universal constructor
- Physicist Freeman Dyson’s vision of a future green technology in which you can, for example, grow houses from seeds.
The emerging technology that may someday lead to Dyson’s utopia is becoming known as “synthetic biology” and deals with deliberately engineered organisms. This is also referred to as “artificial life,” the development of “designer genomes.” To produce something, you just create the DNA for it.
Some key points in Dyson’s vision include:

- Solar electrical power obtained from modified trees
- Other useful devices/machines grown from seeds
- Houses grown from seeds
- School children able to design and grow new plants and animals
- Plants that mop up excess carbon dioxide or produce fuels from sugar
(Image credit below[6])
The parallelism that Chaitin is drawing between a software system and a living system is more than just a metaphor. Living systems are actually governed by information-theoretic law. As an example of how profoundly this is the case, consider the error-correction processes[7] that operate on DNA sequences during replication. These processes are the biological counterpart of the error-correcting codes of information theory.
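For comparison, here is the textbook Hamming(7,4) code, one of the simplest error-correcting codes of information theory: three parity bits protect four data bits, and any single-bit error can be located and flipped back - structurally the same job DNA proofreading performs.

```python
# Hamming(7,4): codeword layout is p1 p2 d1 p3 d2 d3 d4, where each
# parity bit covers the positions whose 1-indexed binary expansion has
# the corresponding bit set.

def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    c = list(c)
    # The parity checks' results, read as a binary number, give the
    # 1-indexed position of the corrupted bit (0 means no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = encode(data)
codeword[4] ^= 1  # corrupt one bit in "transmission"
assert decode(codeword) == data
print("single-bit error corrected")
```

The redundancy is what buys the correction: seven transmitted bits carry only four bits of message, just as DNA's repair machinery spends energy and molecular overhead to keep replication errors rare.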
One of the things that I do not mean by digital physics is that the Universe is discrete to the exclusion of being continuous. Quantum physics makes it abundantly clear that Nature, at root, holds both the discrete and the continuous on an equal ontological footing. There is no need to deny one or the other. While it may seem that information theory is rooted in the mathematics of the discrete, this is a misconception. In fact, Shannon's information theory originated in radio-frequency technology, which is a fully continuous (analog) domain. The Kullback-Leibler divergence measures the relative entropy between probability distributions over a continuous domain and can be used to generalize information theory from the discrete domain to the continuous domain. I am using the term "digital" to emphasize the underlying computational structure of the Universe. But, as we asserted already, if the Universe is a computer, it is a quantum computer, not a classical computer. It is neither discrete nor continuous - it is both.
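As a concrete check that nothing discrete is required, the KL divergence between two Gaussian distributions has a closed form and can also be computed by direct integration over the continuous domain (a generic sketch, not tied to any physical claim in this post):

```python
from math import exp, log, pi, sqrt

def kl_gaussians(mu1, s1, mu2, s2):
    """Closed-form KL divergence D(N(mu1, s1^2) || N(mu2, s2^2)), in nats."""
    return log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def kl_numeric(mu1, s1, mu2, s2, lo=-50.0, hi=50.0, n=100_000):
    """The same divergence by midpoint-rule integration of p*log(p/q)
    over the real line - a fully continuous computation."""
    def pdf(x, mu, s):
        return exp(-((x - mu) ** 2) / (2 * s * s)) / (s * sqrt(2 * pi))
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        p, q = pdf(x, mu1, s1), pdf(x, mu2, s2)
        if p > 0:  # skip regions where p underflows to zero
            total += p * log(p / q) * dx
    return total

print(kl_gaussians(0, 1, 1, 2))  # closed form
print(kl_numeric(0, 1, 1, 2))    # numerical integration agrees
```

Both routes give the same number, which is the point: relative entropy is perfectly at home on a continuous domain.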
While I am presenting a cosmology in this series, I will be presenting a view that would not ordinarily fall under the term "cosmology". I will be presenting a Universe full to bursting with structured complexity - a Universe that is vibrantly alive, highly connected and non-random. The Universe - on the cosmic scale - is lush and saturated with life wherever life can be supported in exactly the same way that Earth's ecosystem is.
Ere many generations pass, our machinery will be driven by a power obtainable at any point of the universe. This idea is not novel. Men have been led to it long ago by instinct or reason; it has been expressed in many ways, and in many places, in the history of old and new. We find it in the delightful myth of Antaeus, who derives power from the earth; we find it among the subtle speculations of one of your splendid mathematicians and in many hints and statements of thinkers of the present time. Throughout space there is energy. Is this energy static or kinetic? If static our hopes are in vain; if kinetic — and this we know it is, for certain — then it is a mere question of time when men will succeed in attaching their machinery to the very wheelwork of nature. - Nikola Tesla
1. Leibniz, Information, Math and Physics, Gregory Chaitin
2. Metabiology, Gregory Chaitin
3. This is why this is a series on cosmology; I posit that a Universe that obeys universal mathematical laws (not just physical laws) is best thought of as eternal, without beginning or end, because a beginning requires an explanation (why/when did the Universe begin?) and something that requires an explanation is more complicated than something that does not (there's nothing to explain if the Universe has always been here and always will be).
4. Speculations on biology, information and complexity, Gregory Chaitin
5. Chaitin's metabiology thesis should not be confused with DNA computing, which is best viewed as an engineering problem in the design of computational hardware
6. "Plant a mobile phone, grow a tree"
7. DNA Proofreading, Correcting Mutations during Replication, Cellular Self Directed Engineering