Sunday, August 27, 2017

Cryptocurrency Notes

Let's start with the eight fallacies of distributed computing:

  • The Network is Reliable
  • Latency is Zero
  • Bandwidth is Infinite
  • The Network is Secure
  • Topology Doesn’t Change
  • There is One Administrator
  • Transport Cost is Zero
  • The Network is Homogeneous

Let's add another bullet to this list:

  • Storage is free

The Bitcoin network has escaped some of these fallacies, but not all of them. What is so remarkable about Bitcoin is that it has solved one problem that was previously thought unsolvable - Bitcoin solves the "The Network is Secure" fallacy. Bitcoin's blockchain is not only fault-tolerant, it is security-fault tolerant. What I mean by "security fault" is any local security breach that allows an unauthorized user to access private information that he or she should not have been allowed to access. Bitcoin has no single point of failure, that is, it has no central, trusted agent. In fact, it is not even vulnerable to many-point security failures. To a first order of approximation, Bitcoin remains secure up to the point where 50% of the CPUs on the network are hacked, simultaneously. This is a remarkable property when you do the math on the difficulty of orchestrating a many-point security breach. Even if you were so good at hacking that you could break into any given computer with 90% probability, the probability of orchestrating a simultaneous break-in of 100 independently controlled computers is 0.9^100 ≈ 0.0027%. There are vastly more than 100 independently controlled computers (even if you aggregate them per operator) on the Bitcoin network.
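
Spelled out (the 90% per-machine break-in rate is, of course, purely illustrative):

    P(\text{100 simultaneous break-ins}) = 0.9^{100} \approx 2.66 \times 10^{-5} \approx 0.0027\%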

The most important fallacies that Bitcoin has fallen prey to are:

  • Latency is Zero
  • Bandwidth is Infinite
  • Storage is free

Each of these fallacies is showing up in the Bitcoin network today. Transactions can take hours to be fully confirmed (latency is not zero), global throughput of the network is throttled to a handful of transactions per second (bandwidth is not infinite) and the blockchain is exploding in size at an astounding rate.

Other fallacies that indirectly affect Bitcoin:

  • Topology Doesn’t Change
  • Transport Cost is Zero
  • The Network is Homogeneous

The core of the blockchain protocol is based on utilizing proof-of-work chains to decide which copy of the blockchain is the correct copy. This method of maintaining a secure, distributed copy of a ledger is simple and foolproof, but it expends an immense amount of computation to decide a simple question - which copy of the latest transactions is the true copy (sufficiently old transactions never come into question). The amount of computation expended is a consequence of assuming too little about the topology of the network at any given time. In short, Bitcoin goes overboard in avoiding the "Topology Doesn't Change" fallacy. By tracking its own topology and parceling work out to subnets, the amount of computation could be massively reduced from that required by the Bitcoin protocol. This does not require assuming that the network topology never changes; it only requires assuming that the network is not completely changing from moment to moment. SegWit is one attempt to address this problem.
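
To make the proof-of-work mechanism concrete, here is a minimal Python sketch of the underlying hash puzzle - a toy illustration only, not Bitcoin's actual block-header format or difficulty-adjustment rules:

    import hashlib

    def proof_of_work(block_data: bytes, difficulty_bits: int) -> int:
        # Find a nonce such that SHA-256(block_data || nonce) has at least
        # `difficulty_bits` leading zero bits. Verifying a candidate takes one
        # hash; finding one takes, on average, about 2**difficulty_bits hashes.
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    # The "true" copy of the ledger is simply the chain embodying the most
    # accumulated work of this kind.
    print(proof_of_work(b"example block", difficulty_bits=20))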

Non-zero transport costs are affecting the Bitcoin network in the form of the wildly fluctuating transaction fees charged to Bitcoin users. While transport costs are not zero, they are also not an arbitrary function of the whims of a node. In short, Bitcoin has no systematic way for transport costs to be arbitraged across the network so that users can discover a general market price for transactions.

Non-homogeneity of the network affects Bitcoin because the blockchain assumes that nodes can store huge volumes of data at almost no cost. Some Bitcoin apologists argue that "running a partial node" or "using a third-party Bitcoin service" are solutions to this problem but they really are not. There is no good reason that a wristwatch with a CPU inside of it should not be able to securely transact on a distributed, digital network.

Ethereum is the #2 cryptocurrency behind Bitcoin. Ethereum is quite complex. Where Bitcoin does one thing well (securely trade reusable proofs-of-work), Ethereum tries to implement general-purpose computation in its core protocol specifically in order to enable "smart contracts". It charges users for each unit of calculation that is performed by the network using a quantity referred to as "gas" (by analogy to petroleum). While smart-contracts are only going to become more important in the future, Ethereum's design seems to me to be overly centralized and ungainly.

Everything that every cryptocurrency does today can be done using an abstraction called a distributed, secure timestamp service. Bitcoin trivially implements a secure timestamp service - just transact on the Bitcoin network, and use the transaction ID as your timestamp. The Bitcoin network records the time of the transaction in the blockchain and you cannot alter or delete the blockchain. Thus, it is a secure timestamp. Many other secure, distributed services can be built on top of this one building block.
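
As a sketch of the hash-and-anchor pattern such a timestamp service enables (the helper name here is mine; the transaction-building step is only described in the comments):

    import hashlib

    def document_commitment(document: bytes) -> str:
        # Commit to a document without revealing it; anyone holding the same
        # bytes can recompute this digest and compare.
        return hashlib.sha256(document).hexdigest()

    digest = document_commitment(b"contract text, v1")
    print(digest)

    # Anchoring this digest in a Bitcoin transaction (for example, in an
    # OP_RETURN output - one common approach) ties the document to the block
    # that confirms the transaction. The block's timestamp then acts as a
    # tamper-evident timestamp for the document, since altering it would
    # require rewriting the proof-of-work chain.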

Unfortunately, Bitcoin's design is inevitably driving up transaction fees. This means that micro-transactions are going to become increasingly infeasible on the Bitcoin network. Litecoin was built because it was foreseen that this would inevitably be a problem with Bitcoin. At the moment, Litecoin is a more feasible candidate for a timestamp service, but it suffers from the same distributed computing fallacies as Bitcoin.

It is possible to build a secure, distributed timestamp service that utilizes a proof-of-work mechanism similar to that employed by Bitcoin but which avoids the latency, bandwidth and storage fallacies that plague Bitcoin. In this arrangement, the timestamps themselves act as the "core" and other services - such as secure, digital currency (distributed ledger) - can be easily built on this core. Secure, distributed data-storage; secure, distributed computation (for smart-contracts); secure peer-to-peer messaging and many other services can be built using this architecture.

Stay tuned...

Saturday, August 19, 2017

Notes on a Cosmology - Part 19, Virtualization cont'd

A computer game is very much unlike the real-world. Your consciousness is not chained to your avatar in the game. You can choose an avatar. You can choose to do something other than play the game. Even the most realistic games are massively unlike the physical world, even measured solely in terms of visual detail. There are many other differences.

In the PMM thought-experiment, we waved away these differences - being able to temporarily forget that you are in a simulation was good enough for our purposes, like the suspension-of-disbelief we experience when reading a good novel or watching an engrossing movie. But let's take the PMM thought-experiment one step further by invoking some sort of phlebotinum. Let us posit that this phlebotinum can be ingested, inhaled, topically applied, placed on the skin like crystals, or whatever. The effect of this phlebotinum is that the user can travel through the parallel universes we posited in Part 18. Specifically, the user of this substance can travel to a universe that corresponds more or less exactly to his or her inner-most wishes and desires. This travel is unrestricted by the bounds of time and space. The user is freed from the possibility of death, regardless of the hazards the user chooses to encounter. Further, we posit that this travel imposes no psychic toll on the user - it is effortless and painless.

The Twilight Zone episode "A Nice Place to Visit" imagines a man who has found himself in a very similar situation, though it is framed in a supernatural context rather than the naturalistic one we have used here.
"Henry Francis Valentine, ... calls himself 'Rocky', because that's the way his life has been – rocky and perilous and uphill at a dead run all the way. He's tired now, tired of running or wanting, of waiting for the breaks that come to others but never to him, never to Rocky Valentine. A scared, angry little man. He thinks it's all over now but he's wrong. For Rocky Valentine, it's just the beginning." 
After robbing a pawn shop, Henry "Rocky" Valentine is shot by a police officer as he tries to flee. He wakes up to find himself seemingly unharmed by the encounter, as a genial elderly man named Pip greets him. Pip explains that he has been instructed to guide Rocky and give him whatever he desires. Rocky becomes suspicious, thinking that Pip is trying to swindle him, but Pip proves to have detailed information on Rocky's tastes and hobbies. Rocky demands that Pip hand over his wallet; Pip says that he does not carry one, but gives Rocky $700 directly from his pocket and says that he can provide as much money as Rocky wants. 
Thinking that Pip is trying to entice him to commit a crime, Rocky holds him at gunpoint as the two travel to a luxurious apartment. Pip explains that the apartment and everything in it are free, and Rocky starts to relax. However, his suspicions rise again when a meal is brought in, and he demands that Pip taste it first to prove that it is not poisoned. When Pip demurs, claiming that he has not eaten for centuries, Rocky shoots him in the head, only for the bullets to have no effect on Pip at all. Rocky realizes that he is dead, and he believes that he is in Heaven and Pip is his guardian angel. 
Rocky visits a casino, winning every bet he makes as beautiful girls gather around him, and enjoys being able to pick on a policeman after Pip shrinks him. Later, Rocky asks Pip if he can see some of his old friends who have also died, but Pip says that this world is for Rocky alone. Except for the two men, no one in it is real. When Rocky wonders what good deeds he could have done to gain entrance to Heaven, Pip takes him to visit the Hall of Records. Rocky looks through his own file and discovers that it only contains a list of his sins, but decides not to worry about it since God apparently has no problem with his being in Heaven.


One month later, Rocky has become thoroughly bored with having his whims instantly satisfied. He calls up Pip and asks for a challenge in which he might run the risk of losing. Pip offers to set up a bank robbery, but Rocky abandons the idea, saying that a pre-planned outcome would take the fun out of the crime. He tells Pip that he is tired of Heaven and wants to go to "the other place," to which Pip retorts, "Heaven? Whatever gave you the idea you were in Heaven, Mr. Valentine? This is the other place!" Horrified, Rocky tries in vain to open the now-locked apartment door and escape his "paradise" as Pip laughs malevolently at his torment.
Narratives surrounding the subject of life-extension are becoming increasingly important in the modern world. We are able to extend our minds using technology in ways that make us, relative to our ancestors, nearly god-like. We are bumping up against the real possibility of indefinite life-extension - artificial eternal life, more or less.

But we really do not know the very long-run implications of our choices under these conditions. Let us suppose that our children or grand-children will be the first generation to reach escape velocity. Does it make sense for them to follow the "tried-and-true wisdom" of their ancestors that has laid out the template of a supposedly ideal life: get a degree, establish a career, buy a house, marry, have children and prepare for retirement? All of these choices may actually be sub-optimal. In fact, they may be positively detrimental to their actual, long-run well-being. By carefully following the prudential advice of their parents, they may inadvertently end up like the infernal Rocky Valentine; they may wake up to find themselves eternally living out the consequences of their sins without even the release of death which delivered our generation from any long-run negative consequences of our supposedly ideal life-pattern.

In Valentine's case, he had become imprisoned by the demonic being, Pip. But we do not have to invoke demons and ghouls to surmise that it is possible that we might inadvertently invent a terrifying future and then curse ourselves with endless life within that future. Some of the leading lights in the development of artificial intelligence are worried about the possibility that we are inadvertently building the agents of our own extinction. But extinction is not actually the worst possible outcome. Even worse, we might invent indefinite life-extension or make ourselves actually unable to die and then end up imprisoned by some hostile, supremely powerful telos[1].

We tend to associate happiness with getting what we want. Rocky Valentine is miserable despite being able to have anything he desires. There are two possible causes of his misery: (a) it's unconditionally impossible for him to attain happiness or (b) he needs to have access to a purpose (end) greater than any he can generate internally[2]. In Rocky's case, we know that it is case (a) because he is imprisoned by a demonic being. But if our children found themselves in a similar position in a hypothetical future world, they would not know whether it is (a) or (b). In this case, the only rational course of action is to attempt to determine if (b) is possible, that is, to find out whether it is possible to get access to a higher end than any we can generate internally.

What we are really talking about is how to avoid regret, in the most generic possible sense[3]. Rocky Valentine thought he was in heaven but, later, found out that he had been duped and was trapped in a hellish prison. At this point, Rocky presumably felt regret for all his past crimes. Given his newfound regret, we can safely guess that, if he had access to a time-machine, he would fly back to a time before he committed all the sins that landed him in this hell and make different choices. Note that time-travel, in this context, is equivalent to the backtracking agent we discussed in Part 18.

The need to time-travel is the result of encountering a problem of some kind in the unfolding history. Any deficiency or unsatisfactoriness in the history of one's existence within our phlebotinum-fueled trip through the multiverse would result in the desire on the part of the partaker to backtrack and try over. This is not unlike the situation that many gamers are familiar with when they encounter a trap in a game level and realize - too late - that they should have made a better choice. "I wish I had gone up the stairs instead of turning to the right!"

Let us informally define a concept I will call regrettal in analogy to the surprisal of Shannon information-theory. Regrettal can be measured as the deviation between the actual length of the history path of a participant immersed in the simulation (with back-tracking) versus the length of the final path they have taken as measured without any back-tracking down regretted paths. There is a formal concept of external regret that is used in game theory[4].
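
A minimal sketch of how regrettal might be computed - under my own assumption that "path length" is just a count of moves:

    def regrettal(history: list, final_path: list) -> int:
        # Informal "regrettal": moves actually made (including every step down
        # a path that was later backtracked and abandoned) minus the length of
        # the path finally kept. Zero means nothing was regretted.
        return len(history) - len(final_path)

    # An agent that wandered down two dead ends before settling on a 4-step route:
    history = ["a", "b", "x", "b", "y", "b", "c", "d"]   # every move, in order
    final_path = ["a", "b", "c", "d"]                    # the path it kept
    print(regrettal(history, final_path))                # 4 regretted steps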

The thesis of this post is that a rational actor would not partake of the phlebotinum unless he had in hand a proof (formal or physical) that partaking of the substance is zero-regrettal. Prior to partaking of the phlebotinum, he had no ability to travel through the multiverse, so, after taking it, he will not be able to return to the moment prior to taking it (to "undo" the action). But it could turn out that, in the future, the only action the user wants to take is to return to the moment prior to partaking of the phlebotinum!

But what does this have to do with virtualization?

In the last post, we concluded that it is possible for an agent that is higher on our Ω-based Kardashev scale to virtualize an agent that is lower on that scale without utilizing hard privilege-limits or silent privilege-limits. This means that the "hardware" that the universe is hypothetically running on could be egalitarian even if we are inside a simulation.

Let's extend the analysis to two agents that are on the same level - in other words, both agents know exactly the same prefix of Ω, Ωm. If one-way functions exist (an unsolved mathematical problem), our two agents would be able to generate useful randomness with the help of an environment, µ. The procedure is as follows: the environment asks each agent for a random number (this random number is not useful to the agent itself, since it generated it), then it applies a one-way function to each number and then combines the results using modular arithmetic (e.g. modulo-2 addition). This randomness does not rely on either agent "helping" the other or giving the other agent any insight into its current behavior, choices or operation. Given useful randomness, the agents can interact with one another in unpredictable ways, even though neither agent has access to more bits of Ω. In other words, the interactions of these agents are not a trivially solved game (à la Tic-Tac-Toe). Even though the behavior of each agent is computable (algorithmic), the activity of each agent is unpredictable to the other.
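
A minimal sketch of that procedure, using SHA-256 as a stand-in for a one-way function (whose existence, as noted above, is conjectural) and XOR as the modulo-2 addition:

    import hashlib

    def one_way(x: int) -> int:
        # Stand-in for a one-way function.
        return int.from_bytes(hashlib.sha256(x.to_bytes(32, "big")).digest(), "big")

    def shared_randomness(contribution_a: int, contribution_b: int) -> int:
        # The environment hashes each agent's contribution and combines the
        # results with XOR (modulo-2 addition); neither agent can bias the
        # output without inverting the hash of the other's input.
        return one_way(contribution_a) ^ one_way(contribution_b)

    print(hex(shared_randomness(12345, 67890)))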

We now have the properties of action (choice) and liveness (through useful randomness). The agents can interact in games of perfect information (like Chess) or games of imperfect information (like Poker). Between the two of them, they are always faced with the possibility of regret because it is possible that one agent has utilized its available resources in solving one kind of problem while the other agent has utilized its resources in solving another kind of problem; if one or the other utilization has turned out to be advantageous in a particular game, the game will go to the agent with the advantage. Unlike in the case of two agents on different rungs of the Ω-hierarchy, this cannot be foreseen or avoided. Both agents run the risk of losing (regret).

The next property we need to discuss is lock-in. So far, we have assumed that the agents are bound by the time-parameter - lock-in is a simple matter of alternating the move. However, real-time games (which is the class of games that most closely resembles the world-as-we-know-it) open up a new realm of possible strategies - timing strategies[5]. And this is where the Simulation Hypothesis begins to loom large - if we are in a simulation, then timing (simultaneity) is much more complicated than a simple, linear time-parameter. Timing, in the broadest possible sense, is a function of what is being computed and how good your best algorithm is at computing it.

The Internet[6] and Bitcoin, in particular, are a good place to start to understand the difference between time in a simulated environment and time in an idealized geometry. In an idealized geometry, we simply wave a hand and say, "At time t, all objects {o0, o1, o2, ...} are at locations {x0, x1, x2, ...}" Einsteinian relativity alters this Newtonian concept of simultaneity but still preserves idealized simultaneity relative to any given inertial frame. What I am asserting is that there is no algorithmically simple definition of simultaneity in a simulated physics in which there is more than one agent.

The reason that simultaneity is complicated is that there is no easy way to define trusted timestamps across a distributed set of computational resources (e.g. the Internet or, more broadly, the universe itself). The Bitcoin network can function as a trusted timestamp service[7]. A simulated universe with multiple agents would have to have a similarly bulletproof mechanism for verifying time-ordering in order to resolve timing disputes to the satisfaction of all agents with a stake in a particular event.


This is why, in Part 16, we placed time below causality in the hierarchy of the quantum monad. Time, in a simulated universe, is an emergent phenomenon. It is the result of the activity of the simulation, not something that stands above and governs the simulation. In a distributed computation, there is no central authority to wave his hand and make all events simultaneous by fiat. Simultaneity must be weakened from the idealized geometric sense (or even the Einsteinian sense of relativity to inertial frames). Simultaneity becomes, instead, whatever agents will accept as simultaneity.

Simultaneity is the basis of lock-in. As with publishing an embarrassing photo to Facebook, you cannot take back something you have published to the record - but it is simultaneity that defines what has actually been published and what has not! Lock-in, in turn, is the basis of regret. The desire to "take back" a choice arises from the irreversibility of the move.

Conclusion

We asserted that a rational agent would not partake of our genie-in-a-bottle phlebotinum - even though it can and will fulfill his every wish! - until he has a proof in hand that he will not regret partaking of it. We then delved into the topic of regret and found that regret is, indirectly, connected to the problem of one-way functions because we need one-way functions to construct useful randomness, giving rise to liveness and the possibility of simultaneous games (interactions).

Here's the kicker - we don't know how hard it is to solve the problem of the existence of one-way functions. It is possible that it is much harder to solve the problem of the existence of one-way functions than it is to simulate a human brain - or even every human being on the planet, simultaneously. If this is the case, we might find ourselves in a position where we are building more and more powerful simulators to solve the problem of one-way functions and begin to simulate humans, human societies - or even the entire planet - merely as a by-product of the search for the proof that one-way functions are possible, or impossible. If the Simulation Hypothesis is true, there are good reasons to suspect that the simulation will not make this any easier than brute-force[8].

In summary, Bostrom's Simulation Argument might have misidentified the motivation of the hypothetical simulators. Perhaps they are not motivated by mere curiosity about their ancestors; perhaps this is computational warfare.

Next: Part 20, The Five Ways

---

1. This is probably the reasoning behind the removal of access to the Tree of Life, recorded in Genesis 3:22-24, "And the Lord God said, 'The man has now become like one of us, knowing good and evil. He must not be allowed to reach out his hand and take also from the Tree of Life and eat, and live forever.' ... [Afterward] he placed on the east side of the Garden of Eden cherubim and a flaming sword flashing back and forth to guard the way to the tree of life."

2. We know this because Rocky already has access to unlimited means to whatever end he happens to choose. If the problem is not the available means, then it must be the chosen end(s).

3. Learning, Regret minimization, and Equilibria [PDF], Multi-armed bandits

4. From External to Internal Regret [PDF]

5. In boxing, for example, feinting a punch is a kind of timing strategy.

6. Google spans entire planet with GPS-powered database

7. Bitcoin Wiki: Block timestamp

8. Say I am an enlightened being of some kind that operates countless universe simulations. I am entangled in a protracted war (or game, if you like) with a being similar to myself and the battlefield is my simulations. Any help I provide lower beings in solving a non-trivial computational problem might assist my enemy in fighting me simply by helping him compute.

Thursday, August 17, 2017

Notes on a Cosmology - Part 18, Virtualization

In the 2016 Isaac Asimov Memorial Debate, Is the Universe a Simulation?[1], David Chalmers makes the following remarks:
The simulation hypothesis says we’re in a computer simulation. A computer simulation’s a computation that was created by someone for a purpose. So, basically, the simulation hypothesis is the computation hypothesis plus something else about someone who created it. And around here is where you might be able to get a little theological and say, okay, well, it’s a naturalistic version of the god hypothesis. [There is a] much weaker hypothesis that the universe is some form of discrete computation and is completely neutral on the question of whether this is actually a simulation in the sense of something that was created by a simulator.
What Chalmers is talking about is telos. Telos is generally associated with theology these days, but many science-fictional universes have imagined telic theories of our world based on design by a non-deity. Perhaps our world was created as a scientific experiment by alien-like beings who are able to engage in interstellar travel as easily as we drive down a highway. Or choose your favorite theory. Because any arbitrary way of imagining a telic origin for the world is as good as any other, we tend to throw our hands up and choose an atelic basis for science. If we have been designed by a greater being (or beings) then, until this is revealed to us, we have no scientific basis to ascertain design or rule it out - it is a metaphysical or theological question.

Telos does not necessarily have to involve personhood, as humans experience it. For example, animals clearly make choices but are also clearly not self-reflective persons in the sense that human beings are. Imagine some inter-galactic, animal-like consciousness with the ability to spawn life-bearing planets but unable to comprehend the complex behavior of the living systems in those planets. In this case, we have been created by a telic entity that is greater (more powerful) than us but which is less intelligent than us.

Virtualization

In modern computation, virtualization is a widely used technology. Virtualization is a consequence of the capacity of a universal Turing machine, U, to simulate any other Turing machine, including itself or any other universal Turing machine U', U'', ... As a concrete illustration, the following image depicts a recent version of Windows Server running multiple, nested instances of itself:


In the context of computer science, we tend to use the terms "simulation" and "virtualization" synonymously. But in the context of cosmology, simulation tends to mean simulation of something, that is, of the laws of physics, or whatever. In this context, all simulation can be considered a kind of virtualization - in short, virtualization is a more general term than simulation. In this post, we will be focusing on virtualization in order to think about computers simulating other computers without respect to any external system, such as a physical universe. This distinction is related to, but independent of the distinction between a computation hypothesis (no telos) and the simulation hypothesis (telos).

The particular feature of virtualization I want to focus on is termed privilege level. When two or more computation environments are executing on the same hardware (same CPU, same memory banks, and so on), there must be some organized system of sharing resources in such a way that the parent environment is not corrupted by its child environment(s). Otherwise, when the parent environment launches a virtualized child environment, that environment is liable to crash the parent - and itself, too. In real-world computation systems, this protection is enforced through privilege levels. A lower-privileged process may not access the resources of a higher-privileged process. There are many different ways to implement privilege levels. Non-virtual privilege levels have long been a common feature of mainstream operating systems, allowing the operating system to protect itself from the applications it is running. But so-called "full virtualization" requires a more thoroughgoing approach to ensure that the system's resources cannot be accessed by virtualized environments in a way that will corrupt the host (parent) environment.

A common, naive approach to enforcing privilege-levels is to check each access that the lower-privileged environment attempts. "If a lower-privileged environment attempts to access the higher-privileged memory area from address 1000 to address 2000, interrupt it and transfer control to the higher-privileged environment so it can decide what to do next." I will refer to this as hard privilege level enforcement. In modern virtualization systems, however, much of this hard privilege-checking is bypassed by simply presenting a virtualized memory and input/output environment to the virtualized process. In this way, the virtualized (child) process is free to access any memory address it likes, but these addresses are silently remapped by the computer hardware to other memory addresses. Since the virtualizing (parent) environment controls how this remapping occurs, it is able to arrange memory in such a way that the virtualized (child) environment will never be able to touch the parent's memory and other resources. I will refer to this as silent privilege-level enforcement.
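
A toy sketch of silent privilege-level enforcement - a single translation offset standing in for what real hardware does with page tables:

    class SilentlyVirtualizedMemory:
        # The child addresses memory freely; every access is silently remapped
        # through a translation the parent controls, so the child can never
        # touch the parent's actual addresses.
        def __init__(self, child_size: int, parent_offset: int):
            self.parent_offset = parent_offset
            self.physical = [0] * (parent_offset + child_size)

        def _translate(self, child_address: int) -> int:
            return self.parent_offset + child_address

        def write(self, child_address: int, value: int) -> None:
            self.physical[self._translate(child_address)] = value

        def read(self, child_address: int) -> int:
            return self.physical[self._translate(child_address)]

    mem = SilentlyVirtualizedMemory(child_size=4096, parent_offset=100_000)
    mem.write(1500, 42)    # the child "writes to address 1500"...
    print(mem.read(1500))  # ...but nothing below address 100_000 is ever touched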

Choice

Our cosmological theory up to this point has been summed up by the concept of the quantum monad. The quantum monad is the union of the universal prior and Seth Lloyd's QC thesis (that the Universe is indistinguishable from a quantum computer). What is missing from this model is any kind of concept of choice. The universal prior is, in a way, too powerful - it ranges over every possible universe, including the ones in which I choose A and the ones in which I choose the opposite of A. Thus, every game-theoretic interaction between you and me (choice) is present in the universal prior, and all of them are equally probable. Thus, there are no choice strategies in the universal prior - the universal prior moots game theory.
In the context of artificial intelligence, Hutter proposes a particularly concrete notion of “possible world”: An environment, in his sense, is a Turing machine which takes an agent’s actions as input and produces a sequence of observations and rewards as output. Given a prior distribution µ over environments, Hutter defines a Bayesian agent AIµ which acts in such a way as to maximize the expected reward, given µ. As usual, Hutter assumes that for every particular environment, AIµ can compute exactly what observations and rewards a given sequence of actions leads to.
However, AIµ cannot in general itself be implemented as a Turing machine, which is problematic in game-theoretic contexts (where an agent’s environment contains other agents). To see this, consider the game of Matching Pennies, in which two players each choose between two actions (“heads” and “tails”); if the players choose the same action, the first player wins a dollar, if they choose differently, the second player wins. Suppose that both players’ decision-making processes are Turing machines, and suppose that both players know the exact source code of their environment, including the source code of their opponent’s decision-making algorithm. (Certainty about the environment is of course equivalent to a µ which assigns probability 1 to a certain environment.) Finally, assume that like AIµ, both players choose optimally given their information about their environment.
In this set-up, by assumption, both players’ decision-making processes are deterministic; each player either definitely plays “heads” or definitely plays “tails”. But neither of these possibilities is consistent. For example, if the first player chooses heads and the second player can predict this, the second player will choose tails, but if the first player can predict this in turn, it will choose tails, contradicting the assumption that it chooses heads.
The problem is caused by the assumption that given its opponent’s source code, a player can figure out what action the opponent will choose. One might think that it could simply run its opponent’s source code, but if the opponent does the same, both programs will go into an infinite loop. Even giving the players access to a halting oracle does not help, because even though a machine with access to a halting oracle can predict the behavior of an ordinary Turing machine, it cannot in general predict the behavior of another oracle machine.[2]
Another way to understand the problem is to imagine two AIXI machines connected together in such a way that each is the other's environment, that is, µ, and the only way for each to maximize its own reward function is to minimize the other's reward function. No matter what we assume about how these machines will behave with respect to one another, we arrive at a contradiction. Thus, the AIXI model is simply not suitable for use in a general game-theoretic sense. The paper proposes reflective oracles as a technical solution to the problem but the technical details are outside the scope of this discussion.

Let us now embark on a thought-experiment. Suppose we have two computable approximations to AIXI - AIXIc1 and AIXIc2. Furthermore, let us suppose that AIXIc1 and AIXIc2 have access to m- and n-bit prefixes of Omega, respectively. That is, AIXIc1 has access to Ωm and AIXIc2 has access to Ωn, where m > n. This access is tantamount to private information. AIXIc1 has an advantage over AIXIc2 in that it can solve 2^(m-n) times more instances of the halting problem in computable time than AIXIc2 can. This might not seem like such a big deal - after all, who cares about whether such abstract machines will halt or not? But remember that we can solve any mathematical problem whose solution can be found with n bits of theory by encoding it in an n-bit program and then solving whether this program halts. In fact, as long as we allow m to become sufficiently larger than n, AIXIc2 will never be able to defeat AIXIc1 in any competitive game[3].

The Great Chain of Being

Now, I can explain a third kind of virtualization privilege-level enforcement that I will call soft privilege-level enforcement. In this arrangement, the child process is free to move anywhere, and the parent (virtualizing) process simply "steps around" the child (virtualized) process. This is only possible if the parent process is always a step ahead of the child process, moving its own resources out of the way before the child process is able to inadvertently trample over them. We can model this situation as a game being played between two agents, where the child process is more or less behaving arbitrarily but the parent process is anticipating the child process's moves. We model the parent process as AIXIc1 and the child process as AIXIc2 and we define the losing condition as any situation where AIXIc2 has accessed the parent's resources in a way that will corrupt the parent's state. To clarify, we are treating the source code of each of these agents as accessible to one another. The only thing that AIXIc1 has access to that AIXIc2 does not is the additional bits of Ω, from n to m.

As long as we make m sufficiently larger than n, we are guaranteed that the child process will never be able to corrupt the parent process's state even though the parent process has not implemented any form of privilege-level construct, whether hard privilege-levels or silent privilege-levels. In short, the parent process and the child process are operating as peers with respect to the available computational resources, but the parent process is able to virtualize the child process without becoming corrupted by anticipating the behavior of the child process and moving its own resources elsewhere. Since the child process can read the parent process's source code, it can attempt to intentionally anticipate the parent's next action and force it to become corrupted. But this fails because the parent process is, in every respect, more efficient than the child process, so the parent process can (for example) simulate the child process's attempt to simulate the parent process (ad nauseam) and take the appropriate action to prevent corruption.

The significance of this idea is that we can talk about telic simulators without having to posit any special "hook" or "power" that such simulators have that their simulated environments do not have access to. Perhaps we are being simulated but this does not necessarily mean that what is simulating us has implemented hard privilege-levels (think solid walls) or silent privilege-levels (think illusions). Rather, all agents are operating on the same "hardware", so to speak, and playing by the same rules. The difference between levels of simulators can then be modeled as simply knowledge of a larger or smaller prefix of Ω. The virtue of this approach is that we can abstract away all other considerations - all differences in the "levels" of the simulation boil down to how much of Omega each level has access to.

This construct resembles the gnostic theology known as the great chain of being. Greater beings are able to (and do) dominate and rule over lower beings. This dominion is based on the varying splendor or worthiness of various kinds of beings.



We can also take this in a more naturalistic direction by looking at it from the point of view of the Kardashev scale. Suppose we compare civilization A and civilization B that are the same in all respects except that civilization A knows one more bit of Ω than does civilization B. We can say that civilization A is objectively more advanced than civilization B. As we noted earlier in this series,
[Chaitin has suggested] that knowledge of Omega could be used to characterise the level of development of human civilisation. Chaitin points out that, in the 17th-century, the mathematician Gottfried Leibniz observed that, at any particular time, we may know all the interesting mathematical theorems with proofs of up to any given size, and that this knowledge could be used to measure human progress. “Instead, I would propose that human progress—purely intellectual not moral—be measured by the number of bits of Omega that we have been able to determine,” says Chaitin.[4]
We can think of the bits of Ω as forming a kind of competency-hierarchy - "Everything you can do, I can do at least as well, or better" is true whenever I know one or more bits of Ω more than you do. Thus, I can effectively enforce privilege-limits on you without the use of unconditional privilege-levels.

Hypotheticals and Parallel Universes

When pondering a decision, we often engage in a kind of hypothetical thinking - "If I turn left here, it will take me down Main Street which is a shorter route to my destination but traffic is heavy. Instead, if I go straight on Oak Street, the route is longer but less busy. I think I will go straight." In computer algorithms, this kind of thinking can be implemented with a technique called backtracking. Equivalently, we can model any backtracking problem as a non-deterministic algorithm and convert it to its deterministic equivalent using a technique called subset construction.

Backtracking systematically maps the possible solutions to a problem onto a tree. If we think of the computer as an agent, this tree can also be thought of as a decision-tree. "If I try this alternative, then the result is such-and-such. This does not match the required solution. So, I must backtrack and try the next alternative."
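
A minimal backtracking sketch over such a decision tree (the tree and the goal here are hypothetical, purely for illustration):

    def backtrack(node, children, is_solution, path=()):
        # Depth-first search over a decision tree: try an alternative and, if it
        # cannot lead to a solution, back up and try the next one.
        path = path + (node,)
        if is_solution(node):
            return path
        for child in children(node):
            result = backtrack(child, children, is_solution, path)
            if result is not None:
                return result
        return None  # dead end: the caller backtracks to its next alternative

    # Toy example: each node n has children 2n and 2n+1 (leaves once n >= 8);
    # search for a path from the root (1) to the node 7.
    children = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
    print(backtrack(1, children, lambda n: n == 7))  # (1, 3, 7)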

We can also characterize hypothetical thinking as a form of simulated parallel universes. For example, suppose I want to predict the outcome of a very complex set of events, such as, "Will North Korea go to war with the United States under such-and-such conditions?" One way to approach this problem would be to build a toy model of the entire population of each country and simulate the behavior of the populations under varying conditions. This is, at present, completely infeasible but it is possible in principle. Such questions are too complex to be answered by shortcut methods, that is, by aggregated models (population dynamics). If we imagine expanding the complexity of the questions we are asking in both depth (resolution) and breadth (expanse), we can imagine reaching a point where we want to be able to simulate an entire planet or, when we begin to colonize space, even larger scales. Such simulations of hypothetical futures would be so rich in detail that they would be worlds in their own right. Nevertheless, being rooted in the physics of our spacetime, they would remain purely hypothetical.

If a "greater being" simulates lower beings than itself, this is a bit like encapsulating these lower beings within a subset of the wider set of possible universes. When a backtracking algorithm searches for a solution, it does so by pruning the tree of possible solutions, leaving fewer and fewer possibilities out of the set of all possibilities. In fact, we can imagine a parallel backtracking algorithm that spawns lower-privilege instances of itself to explore the tree of possibilities - when these instances find what the higher-privilege process is looking for, they are no longer needed and can be terminated. The higher-privilege process does not have to worry about being corrupted by the behavior of its child-processes for the reasons we gave above, even if it is inhabiting the same parallel computer without hard privilege-limits.

Conclusion

For many people, the term "Simulation Hypothesis" is almost synonymous with a Matrix-like world. The point or purpose of the simulation would obviously be to allow the simulator(s) to imprison those who are being simulated. This imprisonment is enforced through deception and/or the power to impose suffering and pain.

The thesis of this post is that it is possible to understand the world as being a simulation without hard privilege-levels. It is not necessary to view the simulator as a personal being like humans, even if it is telic (has some goal or end that it is searching for). The reasons we suspect that the Universe may have a virtualized or "layered simulation" structure are mathematical, not the cosmic horror of being trapped in a prison of illusions. A backtracking algorithm is the discrete equivalent of the quantum path integral. From an information-theoretic perspective, one plausible explanation for the puzzling phenomenon of the path-integral may be that the Universe, at root, is capable of simulating all possible paths of a particle but, through a process similar to search-tree-pruning in backtracking algorithms, the "calculation" of the infinite set of possible paths is finite. This would allay Feynman's famous misgiving:
It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypotheses that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities. - The Relation of Mathematics to Physics
Next: Part 19, Virtualization cont'd

---

1. YouTube - Isaac Asimov Memorial Debate, Is the Universe a Simulation?

2. Reflective Oracles: A Foundation for Classical Game Theory 

3. AIXIc1 can choose to refrain from playing a game it is not sure to win. Also note that we have simply assumed some suitable solution to the theoretical problem of mixing AIXI with game theory.

4. Randomness & Complexity, from Leibniz to Chaitin, "God's Number" by Marcus Chown; Cristian Calude ed.

Tuesday, August 15, 2017

Notes on a Cosmology - Part 17, Cracks in the Standard Cosmology

Before we continue further down the rabbit-hole of the Simulation Hypothesis, we need to stop and talk about the standard cosmology. In most contexts, "standard cosmology" is synonymous with the Big Bang. The Big Bang can certainly be criticized on many different accounts - and bolstered on others - but we are after bigger prey in this post.

Cosmology is, originally, a subject of metaphysics and, in particular, of ontology. Metaphysics can be defined several different ways but I will define it this way: metaphysics is that part of philosophy that has to do with deciding on a framework for answering questions about the nature of reality. For example, is the Universe (that is, the observable world or just the world) infinitely old or finitely old? Ontology is the part of philosophy that is concerned with deciding what has existence and what does not, and how they are related. For example, do numbers exist? Do quantum particles exist when we are not observing them?

Questions of metaphysics are, by their very nature, not a part of science. Otherwise, we would refer to them as "physics" or just "science". The same is true of ontology. The key is to realize that questions of metaphysics can easily be dressed up as questions of science. Some people believe that the Big Bang proves the age of the world. In fact, no scientific theory can tell us the age of the world. The question of the age of the world is a metaphysical question. The world is either finitely old or it is infinitely old. No scientific test could ever decide this question. Thus, you must choose one or the other alternative as the framework in which you will do science.

To see why this is the case, imagine that the world will eventually stop expanding and collapse back to the singularity from which it began - this is sometimes called the Big Crunch or another clever name. It is conceivable that there was a Big Crunch before the world we know began, and that after this world ends in a Big Crunch, there will be another Big Bang immediately thereafter. The following image depicts such a series of expansions and contractions (time moving downward):

Credit: Infinity and the Mind, Rudy Rucker[1]
One might argue that some theories show that expansion will never stop but the preference for an ever-expanding Universe over an expanding and collapsing Universe can never be based on empirical evidence. In short, we cannot prove by any scientific means that the age of the Universe is finite or infinite. Likewise, we cannot prove by any scientific means that the Universe will go on forever, or that it will stop. Such questions are strictly beyond the purview of science and lie in the metaphysical realm of cosmology.

There are many other questions that might seem to be in the realm of science but are properly metaphysical questions. For example, does a quantum particle exist when we are not observing it? This is not a question of science because science is the study of the observable world. By definition, a quantum particle that is not being observed is not part of the observable world. The question of whether it exists is in the same philosophical category as the question of the existence of numbers. Whichever you choose to believe (that it exists, or it does not), your choice is not the result of science but, rather, is a precondition to how you do science.

The preceding remarks have been a preface to specific examples of cracks in the standard cosmology. Just because cosmology is metaphysics does not mean that all cosmological positions are created equal. We require of a cosmology that it seem to us to be correct, that is, true-as-such. Of course, we are not talking about the kind of truth that can be proven, either by logic or evidence. Rather, we are talking about a weaker form of truth, the kind that appeals to that undefinable quality of human reason that we call intuition or aesthetic sense. Our preference for such cosmologies is not arbitrary. First, Nature manifestly prefers symmetries and this agrees with our aesthetic sense. Second, by the principle of parsimony, we prefer simpler (more elegant) explanations to more complex explanations, all else equal. We prefer them so strongly that we have gone to a great deal of work to define exactly what we mean by "simpler" and "more complex". Third, while intuition is no guarantee of success, it has led to spectacular successes throughout the history of human science. Finally, we have no other stronger tools available to us - we are operating at the very edges of human knowledge and understanding.

New Evidence and Paradigm Shifts

In 2008, scientists at Stanford and Purdue University found a statistical anomaly in recorded radioactive decay rates for several specific radioactive elements[2]. These are not the kind of data that can be easily waved away - the statistical anomalies are present in data gathered at many points across the Earth, across a span of years, and with some of the most highly calibrated equipment on the planet. The discovery sparked off a minor storm of controversy in the scientific community[3], with some physicists bolstering the original findings and others concluding that the whole thing is a gigantic misunderstanding[4]. From the original paper:
Unexplained periodic fluctuations in the decay rates of 32Si and 226Ra have been reported by groups at Brookhaven National Laboratory (32Si), and at the Physikalisch-Technische Bundesanstalt in Germany (226Ra). We show from an analysis of the raw data in these experiments that the observed fluctuations are strongly correlated in time, not only with each other, but also with the distance between the Earth and the Sun. Some implications of these results are also discussed, including the suggestion that discrepancies in published half-life determinations for these and other nuclides may be attributable in part to differences in solar activity during the course of the various experiments, or to seasonal variations in fundamental constants.
The kerfuffle is over the claim that the radioactive decay rates of these elements are not constant and "may be attributable in part to differences in solar activity during the course of the various experiments, or to seasonal variations in fundamental constants." Either possibility would deal a fatal blow to the foundations of the modern theory of radioactivity. In the status quo theory, radioactivity is solely a function of the internal configuration of an element - its mass number and the resultant arrangement of the nucleus and electron shells. If something outside of the element itself can influence the radioactive decay rate - some kind of influence from the Sun, for example - then the theory of radioactivity would have to be rewritten from the ground up to incorporate this new variable, whatever it is.

The theory of radioactivity is very old - as scientific theories go - and well-established. It works extremely well. That is, scientists are able to make all sorts of correct calculations using the theory. For these reasons, atomic physicists are naturally highly reluctant to scrap the theory at the first sign of weakness. But this is precisely the price of a rigorous commitment to the classical discipline of the scientific method - when a scientific theory does not work (does not match the observed phenomena), it is thrown on the scrap-heap and a new theory that does work is put in its place. Ever since the universal adoption of the scientific method, it has often been harder to implement this discipline in practice than it is to explain it in theory.[5]

This topic has been written about extensively - it is termed the problem of paradigm shift. Many reasons can be given for why it happens. Established scientists and professors are reluctant to see a lifetime's worth of work consigned to the trash heap and are liable to side with the minority report even after overwhelming evidence against the obsolete theory has been accumulated. Scientific specialization also plays a role in fracturing the kind of interdisciplinary thinking that is required for breaking standing paradigms and progressing towards more holistic ways of thinking.

Quantum Physics or Relativity Theory - Which One is Correct?

One of the most important cracks in modern cosmology is the reconciliation of the quantum and relativistic theories. Quantum mechanics and relativity theory are, ultimately, incompatible, a point that Einstein himself realized[6]. Brian Greene wrote about this conflict and how research in superstring theory has been driven by the realization on the part of many physicists that this fundamental cosmological problem has to be addressed before there can be a unified field theory:
The incompatibility between general relativity and quantum mechanics becomes apparent only in a rather esoteric realm of the universe. For this reason you might well ask whether it's worth worrying about. In fact, the physics community does not speak with a unified voice when addressing this issue. There are those physicists who are willing to note the problem, but happily go about using quantum mechanics and general relativity for problems whose typical lengths far exceed the Planck length, as their research requires. There are other physicists, however, who are deeply unsettled by the fact that the two foundational pillars of physics as we know it are at their core fundamentally incompatible, regardless of the ultra-microscopic distances that must be probed to expose the problem. The incompatibility, they argue, points to an essential flaw in our understanding of the physical universe. This opinion rests on an unprovable but profoundly felt view that the universe, if understood at its deepest and most elementary level, can be described by a logically sound theory whose parts are harmoniously united. And surely, regardless of how central this incompatibility is to their own research, most physicists find it hard to believe that, at rock bottom, our deepest theoretical understanding of the universe will be composed of a mathematically inconsistent patchwork of two powerful yet conflicting explanatory frameworks. Physicists have made numerous attempts at modifying either general relativity or quantum mechanics in some manner so as to avoid the conflict, but the attempts, although often bold and ingenious, have met with failure after failure. That is, until the discovery of superstring theory.[7]
Greene goes on to make the case that superstring theory will one day be able to unite quantum mechanics and relativity theory, a feat that superstring theory has still not achieved.

Naturally, physicists tend to look at the cracks in cosmology from the perspective of physics. However, modern physics is intricately married to higher mathematics and the cracks in modern mathematics go very deep, as we explored in Part 13. By extension, the cracks in cosmology are much deeper than many physicists realize. Gregory Chaitin explains the connection between uncomputable real numbers, the continuum and problems in modern physics[8]:
How do you cover all the computable reals? Well, remember that list of all the computable reals that we just diagonalized over to get Turing's uncomputable real? This time let's cover the first computable real with an interval of size ε/2, let's cover the second computable real with an interval of size ε/4, and in general we'll cover the Nth computable real with an interval of size ε/2^N. The total length of all these intervals (which can conceivably overlap or fall partially outside the unit interval from 0 to 1), is exactly equal to ε, which can be made as small as we wish! In other words, there are arbitrarily small coverings, and the computable reals are therefore a set of measure zero, they have zero probability, they constitute an infinitesimal fraction of all the reals between 0 and 1. So if you pick a real at random between 0 and 1, with a uniform distribution of probability, it is infinitely unlikely, though possible, that you will get a computable real.
Uncomputable reals are not the exception, they are the majority! The individually accessible or nameable reals are also a set of measure zero. Most reals are un-nameable, with probability one...
So if most individual reals will forever escape us, why should we believe in them? Well, you will say, because they have a pretty structure and are a nice theory, a nice game to play, with which I certainly agree, and also because they have important practical applications, they are needed in physics. Well, perhaps not! Perhaps physics can give up infinite precision reals! How? Why should physicists want to do that?
There are actually many reasons for being skeptical about the reals, in classical physics, in quantum physics, and particularly in more speculative contemporary efforts to cobble together a theory of black holes and quantum gravity. 
First of all, as my late colleague the physicist Rolf Landauer used to remind me, no physical measurement has ever achieved more than a small number of digits of precision, not more than, say, 15 or 20 digits at most, and such high-precision experiments are rare masterpieces of the experimenter's art and not at all easy to achieve. 
This is only a practical limitation in classical physics. But in quantum physics it is a consequence of the Heisenberg uncertainty principle and wave-particle duality (de Broglie). According to quantum theory, the more accurately you try to measure something, the smaller the length scales you are trying to explore, the higher the energy you need (the formula describing this involves Planck's constant). That's why it is getting more and more expensive to build particle accelerators like the one at CERN and at Fermilab, and governments are running out of money to fund high-energy physics, leading to a paucity of new experimental data to inspire theoreticians. 
... 
So perhaps continuity is an illusion, perhaps everything is really discrete. There is another argument against the continuum if you go down to what is called the Planck scale. At distances that short, our current physics breaks down because spontaneous fluctuations in the quantum vacuum should produce mini-black holes that completely tear spacetime apart. And that is not at all what we see happening around us. So perhaps distances that small do not exist.
... 
Whether or not quantum computers ever become practical, the workers in this highly popular field have clearly established that it is illuminating to study sub-atomic quantum systems in terms of how they process qubits of quantum information and how they perform computation with these qubits. These notions have shed completely new light on the behavior of quantum mechanical systems.
Furthermore, when dealing with complex systems such as those that occur in biology, thinking about information processing is also crucial. As I believe Seth Lloyd said, the most important thing in understanding a complex system is to determine how it represents information and how it processes that information, i.e., what kinds of computations are performed. 
And how about the entire universe, can it be considered to be a computer? Yes, it certainly can, it is constantly computing its future state from its current state, it's constantly computing its own time-evolution! And as I believe Tom Toffoli pointed out, actual computers like your PC just hitch a ride on this universal computation. 
[end quote]
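As a quick aside on Chaitin's covering argument, the total length of the intervals is just a geometric series:

\sum_{N=1}^{\infty} \frac{\varepsilon}{2^N} = \varepsilon \left( \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \right) = \varepsilon

Since ε can be chosen as small as we like, the computable reals can be covered by intervals of arbitrarily small total length, which is exactly what it means for them to be a set of measure zero.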
These are questions that go far beyond anything that can be answered with empirical evidence. Is the Universe continuous or is it discrete? Quantum physics says "both." Relativity theory says "continuous, but you can get away with treating it as discrete under some conditions." The limits of mathematics tell us that, unless the Universe utilizes an infinite amount of information in every volume of space (however small), space cannot be smooth in the sense of a one-to-one mapping between R^3 and physical space. Cosmology tells us that empirical evidence cannot decide these questions - whether the Universe utilizes infinite or finite information to describe space is a question of metaphysics, not math or physics.

Dark Matter and Dark Energy

From Wikipedia,
[Dark Matter] does not emit or interact with electromagnetic radiation, such as light, and is thus invisible to the entire electromagnetic spectrum. Although dark matter has not been directly observed, its existence and properties are inferred from its gravitational effects such as the motions of visible matter, gravitational lensing, its influence on the universe's large-scale structure, on galaxies, and its effects on the cosmic microwave background.
Prior to the development of relativistic physics, aether theories were proposed to explain light-waves and electromagnetic waves. The physical intuition of a luminiferous aether is straightforward - light is the rippling or waving of an aetheric medium in exactly the same way that mechanical waves in air or water are the rippling or waving of the media of air or water, respectively. If this is the case, then we can translate the well-developed mathematics of mechanical waves from the theory of mechanics to the theory of electromagnetism as a way to unify light, electromagnetism and - ideally - gravity.

Relativity theory essentially banishes the aether, making light a wave that can be thought of as the fundamental metric of space and time. Light is a wave, but there is no medium through which this wave is travelling. Rather, the light wave (that is, its speed) defines distances in space and time. The result is that distances in space and time become relative to the observer, with the speed of light as the one invariant. The mathematics of these ideas did not originate with Einstein - Hermann Minkowski developed most of the mathematics that we know today as relativistic spacetime. Minkowskian spacetime, in turn, can be thought of as an application of non-Euclidean geometry to physics.

After the development and refinement of relativistic physics, earlier aether theories came to be scorned as an example of inventing physical entities for the purpose of facilitating a mathematical theory. While it would be quite nice to be able to repurpose the wave equations of mechanics to describe the wave phenomena of light and electromagnetism, this is just a theoretical convenience. The job of science is to explain the phenomena as they are, not as we would wish them to be. The existence of a luminiferous aether was a reasonable hypothesis prior to the development of relativistic physics but its rejection was well-justified after a better model emerged.

Dark Matter and Dark Energy are hypothetical forms of matter and energy with the added complication that their existence is extremely difficult to establish through any physical experiment. By construction, dark matter can only be inferred through its gravitational effects, leaving little or no room for laboratory work. Even worse, dark matter is posited to make up roughly 85% of the matter in the Universe and, together with dark energy, about 95% of its total mass-energy. In short, it is difficult to see how dark matter is anything more than a just-so hypothesis whose purpose is to salvage a broken theory. From PlasmaCosmology.net:
Within the limited confines of our own backyard, the Solar System, existing gravitational models seem to be holding-up. We have succeeded in sending probes to neighbouring planets ... the Huygens mission recently scored a spectacular success -- landing on Titan, a moon of Saturn, despite unexpected atmospheric conditions. 
It should be noted, however, that [gravity] models begin to break down when we look further [afield]. Gravity, of course, is generally described as a property of mass. The trouble is that we have not discovered enough mass in our own galaxy, The Milky Way, to account for its fortunate tendency not to disintegrate.
The existence of mysterious Dark Matter is hypothesised to account for this shortfall in mass [among other things]... Its existence is only inferred on the basis that [gravity] models 'must be' correct. The alternatives raise too many uncomfortable questions!
Dark Matter is no small kludge factor -- it is alleged to account for between 20% to 99% of the universe, depending on which accounts you read! This has lead to further problems in relation to expansion models, and another hypothetical, Dark Energy, has been invented to overcome these. In summation, Dark Matter and Dark Energy add up to the blank cheques that postpone the falsification of bankrupt theories.

Gravity, Causality and Black Holes

Newton's formulation of the law of gravitation is non-causal. Even though the law enables us to calculate the magnitude of the gravitational force between two bodies, it does not tell us what causes this force. In itself, this is no fault - sometimes, the most that science can do is tell us how one variable correlates to another variable, without knowing why. In electronic systems, there is a similar kind of analysis, called characterization, applied to an electronic component or circuit. We can characterize a circuit as a "black box", meaning, we do not know (or maybe we know, but we don't care) what the internal circuit looks like. All we care about is the circuit's response to varying input stimuli.
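To make the "black box" point concrete, here is a minimal Python sketch (the constants are standard textbook values, and the function name is my own): the function returns the magnitude of the force and is completely silent about why the force exists.

# Newton's law treated as a pure input/output "black box": it returns the
# magnitude of the force but says nothing about why the force exists.
G = 6.674e-11  # gravitational constant, in N*m^2/kg^2

def gravitational_force(m1, m2, r):
    """Magnitude of the gravitational force between two point masses, in newtons."""
    return G * m1 * m2 / r**2

# Example: Earth and Moon, using approximate textbook values.
earth_mass = 5.972e24  # kg
moon_mass = 7.348e22   # kg
distance = 3.844e8     # m, mean Earth-Moon distance
print(gravitational_force(earth_mass, moon_mass, distance))  # roughly 2e20 N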

Einstein's general theory of relativity connects space and matter in such a way that the presence of matter alters the curvature of space. This change in the curvature of space is sometimes taken to be the cause of gravity. However, this is a mistake of reasoning. Einstein's theory tells us how matter, space (and acceleration) are related, but it still does not tell us why. Like solving a triangle, if we know some information about a gravitational system, we can calculate other information for which we do not have measurements. But this does not give us a causal theory that utilizes physical reasoning in the way that, say, Galileo's derivation of the law of inertia did.

One of the things that we can easily notice from an information-theoretic perspective that may be harder to see from other approaches is that the action of gravity - which is obviously real - superficially contradicts the second law of thermodynamics. If we squint and imagine space as a "clumpy gas/dust cloud", the second law of thermodynamics (also known as the law of entropy) dictates that this gas and dust will eventually spread into an evenly distributed gas of constant pressure and temperature. This is jokingly referred to as the heat death of the Universe.

The fact that we observe clumps of matter shows that there is something that is counteracting (though, obviously, not contradicting) the second law of thermodynamics. A refrigerator is an example of a heat engine that can locally counteract the effect of the second law of thermodynamics, cooling one region of space by expelling heat into the surrounding environment. While spontaneous formation of a refrigerator is statistically improbable, it is, of course, not impossible. The human body, for example, regulates its internal temperature using cellular processes that are physically equivalent to a refrigerator. If you believe that the human body evolved, then you should not find it impossible to believe that the Universe has some mechanism by which the second law of thermodynamics is counteracted on a cosmological scale, resulting in gravity and the "clumping effect" that gravity has on matter in space.

From an information-theoretic perspective, a refrigerator is able to reverse the natural progression of entropy because it implements a "micro-mind." What I mean by this is that every heat engine can be thought of as a weakened version of Maxwell's demon. A reversible Turing machine can be thought of as an approximation of Maxwell's demon. In short, the cycle of any heat engine can be idealized as a reversible Turing machine operating a Maxwell's demon trap-door. This is one way of stating Landauer's principle. Modern cosmological theories have begun working information theory into the large-scale structure of the Universe. For example, see Hawking's work on black-hole radiation. But this is a "bolt-on" approach to information theory, that is, it is trying to shoehorn information theory into a pre-existing physical theory.
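For reference, Landauer's principle sets the minimum energy that must be dissipated to erase a single bit of information at absolute temperature T (k_B is Boltzmann's constant):

E_{\min} = k_B T \ln 2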

The question, from the perspective of an information-based cosmology, is: where is the refrigerator compressor? I mean this metaphorically, of course, but the point stands - there is nothing stopping us from modeling the large-scale structure of the Universe as a Maxwell's demon trap-door chamber where we are on the cold side. It can be argued that, if you squint in just the right way, Hawking's black-hole radiation theory is compatible with our refrigerator-model.

Our Star

There is a plethora of cosmological alternatives to the Big Bang theory. One alternative that I find particularly interesting is Plasma Cosmology (PC). PC holds that the gravitational force is not the dominant force in the Universe at large scales. Rather, PC holds that the electromagnetic force dominates at very large scales and its effects are not properly accounted for in the standard model of the solar system. The Electric Universe (EU) theory extends the PC theory by positing that the energy emitted from the Sun is almost entirely electromagnetic in origin and that there is no fusion occurring in the Sun's core, among other things.

The PC/EU theory makes quick work of some of the most puzzling features of our solar system. It is well-known that sunspots are much colder than the Sun's photosphere, even though they open into the Sun, thus exposing the Sun's ostensibly hotter, lower layers to external view. The EU theory holds that sunspots are actually inflows of charged particles into the Sun's interior. They form circular structures because the plasma is flowing in a plasma sheath, which creates a structure not unlike an insulated wire stretching through space, invisible to the naked eye (and other instruments).

The temperature of the Sun's corona measures in the millions of degrees, while the surface temperature of the photosphere is several thousands of degrees. This is an extraordinary phenomenon - how does it happen that the hot Sun is heating its surrounding atmosphere to a much higher temperature than itself? Imagine pulling an iron cannonball from a furnace at very high temperature. Surrounded by room-temperature air, would you expect the air surrounding the cannonball to ever become hotter than the cannonball itself? Of course not. The mainstream solar theory has no satisfactory explanation of this phenomenon. But the PC theory can explain this phenomenon as a plasma double-layer.

The EU theory can explain the planar orientation of the planetary orbits around the Sun. By modeling the Sun as a point charge moving through space, the Sun sets up a magnetic field around itself and this magnetic field plays a role - by interacting with the magnetic fields of the planets - in favoring orbits in the ecliptic plane. Comets, being faster bodies with more eccentric orbits, orbit the Sun more symmetrically (that is, symmetrical with respect to the angular distribution of their orbits around the Sun).

Many other features of the solar system have natural explanations under the PC/EU theories, including planetary canyons and ridges on bodies with no water. The theory that there was once water on these bodies fails when the shapes of the canyons and ridges are taken into account - they are not compatible with a hydrological cycle because there is no consistent downward gradient. Lunar cratering, Olympus Mons and many other features with complex explanations in the standard theory have natural explanations under a PC/EU theory.

None of this is to say that the PC/EU theory is proven. Rather, my purpose in mentioning these alternatives to the standard theory of our solar system is to point out that it is possible that modern cosmological theory has become myopic, focusing on one particular aspect of physics while neglecting other aspects of physics. As we quoted Chaitin in Part 14,
For any ... scientific ... facts, there is always a theory that is exactly as complicated, exactly the same size in bits, as the facts themselves. [This] doesn’t enable us to distinguish between what can be comprehended and what cannot, because there is always a theory that is as complicated as what it explains. A theory, an explanation, is only successful to the extent to which it compresses the number of bits in the facts into a much smaller number of bits of theory. Understanding is compression, comprehension is compression! That’s how we can tell the difference between real theories and ad hoc theories.
The more we cobble onto existing cosmological theory, the greater risk we are running that we are just tailoring our theory to handle more and more special cases without stopping to take stock and assess whether rewriting our theory from the ground up could result in a globally more "compressed" theory.

A Grand Unified Theory

As we saw in Part 13, there is no physical theory of everything. I would wager that, if you were to survey, say, a thousand of the world's top physicists, the majority of them would respond that they believe it is possible that a physical theory of everything could be found. The impetus behind much of modern physics is the attempt to unify various parts of physics into a single theory. This single theory has gone by a variety of names, including Grand Unified Theory (GUT), Unified Field Theory, and others. Superstring theory, for example, is heavily motivated by the desire to unify quantum physics and relativistic physics within a single framework.

The impossibility of a theory of everything does not, of course, exclude the possibility of grand unifications - these have already happened several times in the history of physics. But it is crucial to keep in mind that grand unifications are local to the theories being unified. We must keep in mind that every physical theory has some domain to which it applies - no theory of physics will explain the aesthetics of situational comedy, for example.

Conclusion

We will not be wading into any of the debates covered in this post in any depth. In software engineering, the term code smells is used to refer to code that seems to work in most or all cases but which has the appearance of poor design and is, therefore, suspected to contain hidden bugs. The standard cosmology has "cosmology smells." That is, it exhibits multiple symptoms of deep and hidden flaws in its foundations.

This doesn't make the standard cosmology useless or bad - it almost always works correctly. In fact, the aspects of the standard theory where it breaks down are so obscure that many specialists will never encounter them in actual laboratory work. But that doesn't matter from the point of view of cosmology proper, because cosmology is a subject of metaphysics. For the purposes of metaphysics, the only interesting aspects of the standard, scientific cosmological theory are those aspects that don't work, however obscure they might be. The fact that they don't work is telling us something very important: sooner or later, the standing theory will be reconciled with the empirical evidence, or it will be scrapped.

It has been the habit of established scientific schools of thought throughout history to view the status quo theory as "all but a closed canon." For hundreds of years, we have been on the verge of a grand unified theory that will close the textbooks on new physical theory once and for all. Instead, what has actually happened is that the theory of physics has been repeatedly rewritten from the ground up since the time of Galileo down to today.

In place of the standard cosmology - that is, Big Bang theory - we will be positing a cosmology that organizes the Universe, at all scales, around information. Economizing information on the input to a universal function, U, automatically results in the universal prior that we discussed in Part 9. We live in a Universe in which exact measurements can only be described by a mathematics that admits both wave-like and particle-like properties. If we believe that information is economized (or even conserved), this means that we have to apply the universal prior to the Universe. We live in a quantum Universe whose prior (that is, whose prior probability distribution without empirical measurement) is identical to the universal prior. In addition, we live in a Universe that is "observationally indistinguishable from a giant quantum computer." We have proposed the term quantum monad to describe the causal structure of a cosmology that incorporates these two major features.

Next: Part 18, Virtualization

---

1. Infinity and the Mind, Rudy Rucker

2. Evidence for Correlations Between Nuclear Decay Rates and Earth-Sun Distance, [PDF]

3. Net Advance of Physics: Variability of Nuclear Decay Rates - compendium of papers related to the subject

4. Evidence against correlations between nuclear decay rates and Earth-Sun distance

5. Anti-scientific practices have too frequently flown under the radar of scientific method. One particularly remarkable and grotesque example is the Tuskegee Syphilis Experiment.

6. Einstein-Podolsky-Rosen paradox

7. The Elegant Universe, p. 63

8. Epistemology as Information Theory: From Leibniz to Ω

Friday, August 11, 2017

Notes on a Cosmology - Part 16, The Quantum Monad

Gottfried Leibniz was one of Europe's most remarkable thinkers around the turn of the 18th century. Leibniz is one of the fathers of the calculus, along with Isaac Newton. Leibniz's mathematical notation for calculus has survived to the present day. Leibniz made many other important contributions to Western thought.

One of Leibniz's later works, The Monadology, contains an almost aphoristic condensation of Leibniz's lifetime of thought. Leibniz organizes his ideas around an idea he terms the monad (it is sometimes capitalized, i.e. Monad). In this post, we will be plundering Leibniz's ideas at will. The cosmology I am proposing here is a framework for physical reasoning - it is not a scientific method but it is intended to form a foundation on which a robust scientific method can be built.

In modern higher mathematics, the field of abstract algebra - of which group theory is the best-known branch - explores generalizations of ordinary algebra. Some of these abstract structures are called groups, and it is from them that group theory takes its name. Abstract algebra derives different kinds of algebras by individually relaxing the constraints on ordinary algebra, such as the requirement that an operation be commutative or associative, and so on. In this way, it treats the properties of algebra almost like the properties of physical substances and categorizes the different kinds of mathematical structures that arise from combinations of these properties accordingly.

The property of closure holds that a set is closed under a given operation if applying that operation to elements in the set always yields another element of that same set. For example, the positive whole numbers are said to be closed under addition and multiplication because any two positive whole numbers can be added or multiplied, yielding another positive whole number. But the positive whole numbers are not closed under subtraction or division because these operations can yield negative numbers or fractions that are not whole numbers. Closure is a general property of an algebra, however, and is therefore not restricted to the ordinary operations of addition, multiplication, and so on.
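As a minimal sketch (the sets, operations and the function name below are chosen purely for illustration), closure can be checked by brute force over a small finite set:

from itertools import product

def is_closed(elements, op):
    """Return True if applying op to every pair of elements stays inside the set."""
    s = set(elements)
    return all(op(a, b) in s for a, b in product(s, repeat=2))

digits = range(10)  # a small finite stand-in for a set of numbers
print(is_closed(digits, lambda a, b: (a + b) % 10))  # True: addition mod 10 stays in the set
print(is_closed(digits, lambda a, b: a - b))         # False: subtraction can leave the set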

A common example of an abstract closure is the Rubik's cube. Twisting a face can be considered an operation on the cube. Each twist takes the cube from one state in its state-space to another state in its state-space. Thus, the set of all Rubik's cube states and the operation of twisting the faces form an algebraic closure. This property is important because if we are dealing with an algebraic closure, we do not have to worry about special cases, such as division-by-zero. We can say that algebraic closures are very well-behaved mathematical objects, making them easy to reason about.

When we reason about the world, we alternate between treating the world as a scattered collection of unrelated particulars, on the one hand, and treating the world as a unitary, indivisible whole, on the other hand. In philosophy, the tension between these two ways of thinking about the world is termed the problem of the one and the many. Let us treat mathematical closures as a thinking tool and apply this tool to the world, per se. For example, when I mix two substances in a chemistry laboratory, whatever the result, it is again another substance. We can think of the world by analogy to a mathematical closure; we will just call it a closure, for short. The world-as-closure gives us a way to hold the tension between the one and the many without giving up logical consistency. The world is a duality. Viewed in one way, the world is one substance that is related to itself by many actions. Viewed in another, equally valid, way, the world is many substances that are related to each other by one action. But no matter which way we look at the world, it is a closure.

In mathematics, the motivation for studying closures is that they are well-behaved - the motivation is, ultimately, aesthetic. In physical reasoning, however, this is not our primary motivation - we are constrained in our choice of aesthetic by the facts of the physical world itself. But as we already pointed out in the case of a chemistry experiment, the physical world does indeed behave like a closure, in the broadest sense.

Let's consider again the idea of digital physics, which we introduced in Part 14. One of the most common digital models used in digital physics is called the cellular automaton (CA). Easily the most widely studied CA is John Conway's Game of Life. It has been proved that the Game of Life is Turing universal, meaning, it is possible to implement a universal Turing machine, U, in a properly initialized Game of Life.
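For the curious, here is a minimal Python sketch of a single Game of Life update on an unbounded plane (this is a standard formulation of the rules; the function name and the glider example are mine):

from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; live is a set of (x, y) cell coordinates."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has 3 live neighbours,
    # or 2 live neighbours and is currently alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A glider: after four generations it reappears, shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(glider)  # the same shape, translated by (1, 1)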

The video linked below shows an animation of a massive field of Game of Life that implements, on top of itself, another Game of Life. That is, the field is initialized to simulate the rules of the Game of Life, within the Game of Life itself.


This is a kind of fractal closure property. It is similar to software virtualization, where we say that we have virtualized the hardware environment by completely simulating it in software. This fractal closure property should not be confused with an attempt to solve what can be called "the substrate problem" in digital physics - if the world is made out of software, where is the hardware that it is running on?

The following video is fairly technical but provides an easy-to-follow introduction to the general activity that is occurring inside of a digital computer when it is computing, specifically, when it is adding.


The thesis of digital physics is this: what is happening, physically, when we do digital logic in a computer is the same as what is happening in the above simulation of Life-in-Life - we are merely observing the underlying rules of the physical world at a larger scale than their native scale. One of the motivating factors for reasoning about the physical world this way is that it clearly forms a closure - the world is made of information and evolves according to information transforms. We can think of the world as many individual pieces of information and a single transform. Or, equivalently, we can think of the world as a single piece of information (pattern, message) that is operated on by many transforms, giving rise to the many phenomena we observe.
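To connect this to the addition example mentioned above, here is a minimal sketch of binary addition built from nothing but AND, OR and XOR gate operations (the function names are my own; this is a textbook ripple-carry adder, not code from any particular source):

def full_adder(a, b, carry_in):
    """Add three bits using only gate-level operations; returns (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in                        # XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b))  # AND and OR gates
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, least significant bit first."""
    carry, result = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 6 + 3 = 9: [0, 1, 1] is 6 and [1, 1, 0] is 3, least significant bit first.
print(add_bits([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1] -> 9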

I am not aware of a name for this principle in digital physics but I propose that it be named the projection principle. The projection principle arises from a principle in computation that can be called substrate-independence - I can compute with water pipes and valves, gears and levers, vacuum tubes and wires, or silicon transistors on an integrated circuit. The substrate is irrelevant. The essence of computation consists in the pattern that is moving across the substrate.

The problem immediately arises (as we have already seen in Part 10) that if the world is a computer, it is a quantum computer, not a classical digital computer. As Seth Lloyd explains it, "The universe is observationally indistinguishable from a giant quantum computer."[1] The quantum monad, then, is the result of combining the projection principle with Lloyd's quantum computation (QC) thesis - the Universe is indistinguishable from a giant quantum computer at every scale. Of course, the question immediately arises, "If we are in a giant quantum computer, how come we do not observe quantum effects at the macroscopic scale?" We will be addressing this question in upcoming posts.
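To make the "qubit" language concrete, here is a minimal classical sketch of a single qubit - a pair of amplitudes, a Hadamard gate, and repeated sampling of measurement outcomes. This is an illustration only (the function names are mine), not an efficient simulation of a quantum computer.

import random

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (alpha, beta)."""
    alpha, beta = state
    r = 2 ** -0.5
    return (r * (alpha + beta), r * (alpha - beta))

def sample_measurement(state):
    """Sample one measurement outcome: 0 with probability |alpha|^2, otherwise 1."""
    alpha, _ = state
    return 0 if random.random() < abs(alpha) ** 2 else 1

state = hadamard((1.0, 0.0))  # start in |0>, put it into an equal superposition
outcomes = [sample_measurement(state) for _ in range(10000)]
print(sum(outcomes) / len(outcomes))  # close to 0.5: about half the outcomes are 1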

The quantum monad is a stronger thesis than the QC thesis. Leibniz asserts, regarding the monad, that, "all simple substances or created monads ... [are], so to speak, incorporeal automatons." [2, §18] He contrasts this with corporeal bodies - "every organic body of a living being is a kind of divine machine or natural automaton" [2, §64]. Between the corporeal and incorporeal, there is a duality that correlates well with the duality of quantum physics - that is, the duality between particles and waves. The key, here, is that Leibniz identifies monads as automata. An automaton - whether natural or artificial - is subject to complete description by a set of laws. Thus, the monad is an incorporeal entity that strictly obeys a finite set of laws.

This brings us full circle back to the category of logic as it is the expression of law itself. We began with the digital physics thesis, which derives from the projection principle. We then derived from the projection principle the idea of the quantum monad. The quantum monad, in turn, can be seen as nothing more than the strictest application of logic to phenomena that are directly observed as well as to phenomena that are only indirectly observed (inferential). Thus, the quantum monad is exactly equivalent to the evolution of quantum causality.

From the quantum monad, we may infer that you and I, and everything around us are all components of a massive, analog computation. This computation follows the projection principle and forms a physical closure - every combination of substance gives rise to substance of such a form that it again admits of re-combination by the set of combinations that were available originally. To refer to this computation as "a simulation" may be jarring, at first, but it actually fails to capture the true immensity of the implications. Not only might the Universe be stranger than we can suppose, we may be able to work out the strangeness of the Universe to a far greater degree than any of our ancestors had ever dared to suppose, simply by working out the logical implications of the universal prior in a quantum computation with iron rigor.

The PMM thought-experiment shows that, in a simulated world, there is no such thing as "weird" or "spooky". In fact, quantum mechanics is relatively boring in a simulated world. A simulated physics would contain para-consistent spacetimes. For example, when you walk through a door in one direction, it connects rooms A and B, but when you walk through it in the other direction, it connects rooms B and C. We can always make such a spacetime consistent by adding dimensions but that is beside the point - arbitrarily high dimensionality is the rule in computation, not the exception.

As we mentioned in an earlier post, physical properties can be understood as degenerate forms of the ideal set of all possible properties. We can imagine simulation-builders imposing physics-like properties because they are useful. Locality, solidity (mutual-exclusion), flatness, linearity, massiveness, continuity, and so on, are features that are useful for imposing inescapable resource-bounds. In short, any environment that is constructed in such a way as to impose scarcity upon operators in that environment will have to impose properties very much like those that are familiar to us from careful observation of physical materials. In turn, imposing scarcity upon operators in a virtual environment is crucial for any sort of game of incomplete information because privacy can only be guaranteed up to resource bounds in any environment where all events are public record.[3]
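A toy sketch of the point in footnote [3]: in an environment where every message is public, privacy can still be had, but only up to a resource bound. The numbers below are deliberately tiny and insecure; they only illustrate the shape of a Diffie-Hellman-style exchange.

# Everything below is "broadcast" publicly except the two secret exponents.
p, g = 23, 5                      # toy public prime and generator
alice_secret, bob_secret = 6, 15  # private values, never transmitted

alice_public = pow(g, alice_secret, p)  # 5^6 mod 23 = 8
bob_public = pow(g, bob_secret, p)      # 5^15 mod 23 = 19

# Each party combines the other's public value with its own secret.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)
print(alice_key == bob_key)  # True: a shared secret derived entirely from public messages

# An eavesdropper must solve a discrete logarithm to recover the secret -
# trivial at this size, infeasible at realistic sizes. The privacy is real,
# but it holds only up to that resource bound.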

Arbitrary choice of a set of properties to impose upon operators in a virtual environment is almost certain to leave what hackers refer to as "attack surface" - logical holes in the security design that allow cheaters to take undue advantage of resources in the virtual environment. In MMOGs, this shows up as cheaters giving their characters unlimited resources such as health, weapons, ammunition, and so on, that is, granting themselves privileges that no fair participants have access to. Thus, truly robust virtual environments that support games of partial information must choose rule sets that have provable properties, which brings us right back to group theory because it utilizes abstractions that are easy to reason about, such as the closure property.

The Perl programming language has a package called Quantum::Superposition, originally authored by Damian Conway. This package enables a set of abstractions that are the discrete equivalent of the behavior of quantum superpositions in the continuous domain. The Prolog language implements a form of non-determinism (back-tracking search) that can be thought of as the discrete-choice equivalent of the quantum path integral in continuous path-space. These equivalences between digital programming paradigms and quantum systems are just a few examples of an important principle - if the "real" laws of the Universe are computational in nature, then geometric or spacetime-based rules are the wrong way to analyze the world.
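The following is not the actual interface of Quantum::Superposition; it is only a rough Python analogue of the behavior described - a single value that stands in for several alternatives at once, with comparisons quantified over all of them:

class Any:
    """A discrete 'superposition': one value standing in for several alternatives."""

    def __init__(self, *values):
        self.values = values

    def __eq__(self, other):
        # The comparison is quantified over the alternatives: true if any of them matches.
        return any(v == other for v in self.values)

x = Any(1, 2, 3)
print(x == 2)  # True: one of the alternatives equals 2
print(x == 7)  # False: no alternative equals 7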

While spacetime laws are truly fundamental, the reason they are fundamental is that they are mathematically natural, not the other way around. This explains the apparent paradox of Minkowski spacetime. Minkowski spacetime seems weird to us but it's mathematically natural. Thus, it is an efficient geometry relative to rectilinear, Euclidean space. Euclidean space, in turn, is suitable for human scale and, thus, it is how our brain constructs or interprets local material reality. When we examine questions of indirect phenomena, such as, "Where is the particle?" the true answer is that there is no "where" at all. There is only the particle's state. In the PMM, for example, we may examine the geometry of the VR Environment (the rendered video and audio), versus the physical geometry of the video and audio state within the computers that make up the Shared World State. While these are not completely un-correlated, whatever correlation exists is highly non-linear and a function of arbitrary resource limitations that fluctuate chaotically based on local conditions within the simulating computers themselves. In short, it doesn't matter where the state describing a particular polygon[4] within the simulated environment is "really" located within the geometry of the simulating computer's memory banks - these two geometries might as well be random with respect to one another.
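Returning for a moment to the remark about Minkowski spacetime: for reference, the Minkowski line element is the quantity that all inertial observers agree on, which is the sense in which the geometry is "mathematically natural" despite seeming weird (sign conventions vary):

ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2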

Because we are visually-oriented beings and because the sense of sight is arguably the most exact of the five senses, we tend to attribute great importance to space and time. Space and time are, arguably, the two greatest organizing principles of modern scientific thought. But in a quantum monadic Universe, space and time are not the highest principle of organization. In fact, space and time sit low on the totem-pole. In the following image, I have arranged the organizing principles of the quantum monad - we will only be exploring part of the hierarchy in this post:

The material creation is the physical world, per se. Time sits above the material creation as the inexorable principle from which flows all other resource limits in the material creation. But we have placed causality above time. Causality is a synonym for law itself or just logic. The projection principle lives at the level of causality. We can see this by operating a physics simulation, say, of the aerodynamics of an airplane wing. The simulation allows us to do physically impossible things like step backwards in time, freeze time, or move the simulation forward at faster-than-real-time. The simulator contains the logic of the physical systems it simulates and, thus, it is free of the constraint of time. The idea that comprehension of a physical system can remove the time-parameter is at least as old as Lagrangian physics. But the quantum monad takes this further than a thought-experiment and asserts that it is really the case that - wherever one system comprehends (exhaustively simulates) another system - the time-parameter is rendered subservient to the causal structure. In short, time can physically run backwards (or stop, or whatever) just whenever one system fully comprehends another.[5] We will leave the categories of will and logos for a future post.
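As a minimal sketch of what "running time backwards" means inside a simulator, here is a kick-then-drift update for a mass on a spring together with its exact algebraic inverse (illustrative only; the function names are mine, and any reversible update rule would do):

def step_forward(x, v, dt, k=1.0):
    """Kick-then-drift update for a unit mass on a spring with stiffness k."""
    v = v - k * x * dt  # kick: apply the spring force to the velocity
    x = x + v * dt      # drift: move with the updated velocity
    return x, v

def step_backward(x, v, dt, k=1.0):
    """Exact algebraic inverse of step_forward: undo the drift, then undo the kick."""
    x = x - v * dt
    v = v + k * x * dt
    return x, v

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = step_forward(x, v, 0.01)   # run the "simulated physics" forward in time
for _ in range(1000):
    x, v = step_backward(x, v, 0.01)  # run the same physics backward in time
print(x, v)  # approximately (1.0, 0.0): the simulation returned to its starting state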

Next: Part 17, Cracks in the Standard Cosmology

---

1. The Universe as Quantum Computer, Seth Lloyd (December 17, 2013)

2. The Monadology, Gottfried Leibniz

3. This is the entire basis of public-key cryptography.

4. In computer graphics, the basic unit of 3D rendering is the polygon - typically a 3-gon (triangle) sharing its edges with other polygons in such a way that they form a closed volume. The polygons are oriented in 3D space according to their relation with each other and their overall relation to the player's viewpoint, sometimes referred to as camera-coordinates.

5. Utilizing algorithmic information theory, we can objectively define what it means for one system to comprehend another.
