Thursday, October 26, 2017

How to Cull the Blockchain

After doing a bit of digging on the topic, I have been unable to find any serious, detailed proposal for how to cull the blockchain. The Bitcoin whitepaper discusses using a Merkle tree to prune transactions, but this method cannot be used to cull the blockchain as such - that is, it cannot be used to move the Genesis block forward. The Bitcoin Core client uses pruning internally, but that pruning cannot alter the blockchain itself because it is not secured by mining.

In this post, I want to lay out a proposal for how to cull the blockchain in a tamper-proof way. Note that the concept of culling does not rely on any particular technique or algorithm - culling is a protocol-level action. An objection to a particular detail of this proposal does not invalidate the proposal itself.

First, why do we need to cull? The blockchain is 140GB and, at maximum utilization of block weight, could grow at a sustained rate of 100GB per year; it will almost certainly continue to grow at a rate of at least 50GB per year. Storage cost is not widely thought to be a serious problem, but storage is not the primary problem with an ever-growing blockchain anyway. Peer-to-peer networks are generally designed for lightweight join/leave operations, and every block added to the blockchain makes connection and disconnection more costly for peer nodes, which works against Bitcoin's peer-to-peer nature. In addition, relaying the raw blockchain is an O(n²) bandwidth problem, because every byte added to the blockchain is added to every future relay of the blockchain, and O(n²) is generally not considered scalable. The steadily increasing size of the blockchain causes other problems as well.

The reason for starting the blockchain at the Genesis block is that anybody trying to fabricate the blockchain would have to perform more hashing than the Bitcoin network has performed during its entire existence. This is a staggering amount of computation. Almost by definition, the full hashing capacity of the Bitcoin network can only do about 1 block's worth of hashing in 10 minutes. Fabricating more than 1 or 2 blocks back from the present block would be an astounding achievement - the full blockchain presently stands at around a half-million blocks. This is beyond overkill from a security perspective.

Conceptually, the blockchain regulates updates to an implied database called the UTXO-set (Unspent Transaction Output Set). You can use Bitcoin software to build the current UTXO-set from the blockchain. The software will scan the blockchain and update the UTXO-set for every transaction that has occurred, discarding old transaction information as previous UTXOs are spent. Here is an image depicting the conceptual UTXO-set as a pie chart, each slice identified by an associated witness.



A witness is the part of an unspent output that secures it. Each witness can take one of several forms: pay-to-pubkey-hash (P2PKH), pay-to-script-hash (P2SH), pay-to-witness-public-key-hash (P2WPKH), and so on.

Each block that is added to the blockchain is an implicit update of the current UTXO-set:


This fact, combined with the fact that blocks can only be added one-at-a-time, in a well-defined order, makes Bitcoin's UTXO-set an extremely well-behaved entity. It is a discrete-state system, where the state at time t_n is always just S_n.
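To make the state-transition picture concrete, here is a minimal sketch in Python. The block and transaction attribute names (transactions, inputs, prev_txid, and so on) are illustrative placeholders, not Bitcoin Core's actual data structures:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class Output:
    witness: bytes   # the locking script / witness program (P2PKH, P2SH, P2WPKH, ...)
    value: int       # balance in satoshis

# The UTXO-set: a map from outpoint (txid, output index) to the unspent output.
UTXOSet = Dict[Tuple[bytes, int], Output]

def apply_block(state: UTXOSet, block) -> UTXOSet:
    """Compute S_{n+1} from S_n by applying one block's transactions in order."""
    new_state = dict(state)
    for tx in block.transactions:
        # Every spent outpoint leaves the set (coinbase inputs spend nothing)...
        for txin in tx.inputs:
            if not txin.is_coinbase:
                del new_state[(txin.prev_txid, txin.prev_index)]
        # ...and every newly created output enters it.
        for index, txout in enumerate(tx.outputs):
            new_state[(tx.txid, index)] = Output(txout.witness, txout.value)
    return new_state
```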


To cull the blockchain, all we need to do is generate a snapshot of the UTXO-set at some time t_snapshot:



… make that snapshot itself part of the blockchain (playback), and then reset the Genesis block to be the block immediately prior to the snapshotted UTXO-set. This all might sound very dangerous because we are throwing out oodles of transactions and transaction-history all tangled into the parts of the blockchain we are culling. But it's not so bad. Let's look into the details.

The first thing we need to define is some kind of snapshot schedule. This is no harder to define than a coinbase mining reward halving. Just set it and let it go. Every X blocks, there will be a snapshot.
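A sketch of the schedule rule; the interval used here is a placeholder, not a proposed constant:

```python
SNAPSHOT_INTERVAL = 52_560   # placeholder: roughly one year of blocks at ~144 blocks/day

def is_snapshot_block(height: int) -> bool:
    """True if the block at this height is scheduled to kick off a UTXO-set snapshot."""
    return height > 0 and height % SNAPSHOT_INTERVAL == 0
```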

Next, we need to define an encoding of the UTXO-set in a network-universal way. To do this, we list every single unspent output in the UTXO-set (transaction ID, witness and balance), sorted by transaction ID in lexicographic order:



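A minimal sketch of one possible network-universal encoding, reusing the Output/UTXOSet types from the earlier sketch. The fixed-width field layout (and the use of the output index as a tie-breaker within a transaction) is an assumption on my part; the real wire format would have to be nailed down in a BIP:

```python
def serialize_utxo_set(state: UTXOSet) -> bytes:
    """Flatten the UTXO-set into a canonical byte string: one entry per unspent
    output, sorted lexicographically by transaction ID (then output index)."""
    blob = bytearray()
    for (txid, index), output in sorted(state.items()):
        blob += txid                                    # 32-byte transaction ID
        blob += index.to_bytes(4, "little")             # output index within the tx
        blob += output.value.to_bytes(8, "little")      # balance in satoshis
        blob += len(output.witness).to_bytes(2, "little")
        blob += output.witness                          # witness / locking script
    return bytes(blob)
```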
Then we just hash this list to generate a UTXO-set snapshot hash. We chain this hash with the previous UTXO-set snapshot hash in the same way that blockchain blocks are connected in a hash chain[1]. Now, we include this hash in the block that is scheduled to kick off the snapshot. The entire flow is depicted here:



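Computing and chaining the snapshot hash could then look something like this (SHA-256 is assumed only because it is what Bitcoin already uses; double-SHA-256 or any other agreed construction would work just as well):

```python
import hashlib

def snapshot_hash(prev_snapshot_hash: bytes, serialized_utxo_set: bytes) -> bytes:
    """Hash the serialized UTXO-set and chain it to the previous snapshot hash,
    mirroring the way each block commits to its predecessor. The result is the
    value that the kickoff block must carry."""
    utxo_digest = hashlib.sha256(serialized_utxo_set).digest()
    return hashlib.sha256(prev_snapshot_hash + utxo_digest).digest()
```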
Now that the UTXO-set snapshot has been defined, we wait some number of blocks. This is designated as “consensus holdoff time 1” in the drawing above. The purpose of the holdoff time is to ensure that the snapshot hash is really agreed upon by the rest of the network – the network should reject a block that was scheduled to kick off a snapshot but was mined with an incorrect UTXO-set hash. Since resetting the Genesis block has network-wide ramifications, lots of holdoff time should be given – 12,960 blocks would be about 3 months. A generous holdoff time will also provide room for emergency measures should something go drastically wrong.

Next, we begin “playback”. Playback works as follows. Assemble the snapshot UTXO-set (as originally hashed), break it into sectors X bytes in size, and require that miners include a UTXO-set sector, in order from 1 to N (where N is the snapshot UTXO-set size divided by X, rounded up), with each block mined after playback begins. After N blocks, playback is finished and the entire UTXO-set is now encoded into the blockchain.
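Sectoring the snapshot payload for playback is then just a slicing operation; X is left as a placeholder parameter here:

```python
SECTOR_SIZE = 1_000_000   # placeholder for X, in bytes

def playback_sectors(snapshot_payload: bytes, sector_size: int = SECTOR_SIZE) -> list:
    """Split the payload into N sectors of at most `sector_size` bytes each.
    Sector i must be included in the i-th block mined after playback begins,
    so N = ceil(len(payload) / X) blocks complete the playback."""
    return [snapshot_payload[i:i + sector_size]
            for i in range(0, len(snapshot_payload), sector_size)]
```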

As we have described the process so far, there is an attack (it is far-fetched, but still an attack). Suppose we are part way through playback. Suppose a bad actor commanding significant mining hashrate has all of his UTXOs in the early sectors of the UTXO-set and decides to mount a hostile hardfork that will alter the remaining playback blocks and the final playback UTXO-set snapshot hash, perhaps eliminating some portion of the UTXO-set in order to deflate the total money supply, or whatever. The rest of the network could always just choose to ignore this bad actor, but a well-coordinated attack might make users afraid or confused about what is happening. It is conceivable that a non-negligible part of the network could switch part way through playback onto this poisoned UTXO-set.

There is a tool that can be used to completely eliminate the possibility of such a "mid-playback fork": an all-or-nothing transform (AONT[2]). The idea is this. Before we begin playback itself, we first apply the AONT to the snapshot UTXO-set, transforming it into a new file of the same size as the original UTXO-set. Now, we partition this file into N sectors, each X bytes in size. Because an AONT is invertible and uses no secret key, anyone can recover the original UTXO-set simply by running the transform in reverse. Now, no one can benefit from interrupting the playback, because the playback itself is meaningless until it is completed and, in addition, every sector of the playback depends upon the entire UTXO-set, so there is no way to "alter" the UTXO-set such that some prefix of the AONT output remains the same, which is what a mid-playback fork would require.
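Here is a minimal sketch of the hash-as-key construction mentioned in footnote 2. The keystream is built from SHA-256 in counter mode purely so the sketch is self-contained; a real deployment would use a vetted stream cipher. The "key" is just the hash of the serialized snapshot, which is already committed on-chain, so inverting the transform requires no secret:

```python
import hashlib

def _xor_keystream(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a keystream derived from `key` (SHA-256 in counter mode)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "little")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def aont_package(serialized_utxo_set: bytes) -> bytes:
    """Encrypt the snapshot under its own hash (footnote 2's construction).
    Changing any byte of the input changes the key, and therefore every byte
    of the output - no prefix of the transform survives an alteration."""
    key = hashlib.sha256(serialized_utxo_set).digest()
    return _xor_keystream(key, serialized_utxo_set)

def aont_unpackage(packaged: bytes, utxo_set_hash: bytes) -> bytes:
    """Invert the transform using the snapshot hash committed in the blockchain."""
    return _xor_keystream(utxo_set_hash, packaged)
```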

Once playback is complete, we wait for “consensus holdoff time 2” before resetting the Genesis block. This holdoff time might be somewhat shorter, say 1,008 blocks (about one week), in order to allow nodes to begin culling their copies of the blockchain as soon as possible. In any case, the exact numbers given here are not important. The network should choose numbers that incorporate all the relevant factors.

Once the culling process is complete, the entire, correct UTXO-set can be reconstructed starting from the new Genesis block. A device/client that is offline for a very long time can still sync up with the network given nothing more than the UTXO-snapshot hash chain + the current blockchain.

Note that the security of the blockchain itself is never at risk during this entire process. The worst-case scenario is a soft-fork that foregoes resetting the Genesis block in the final step.

One objection that might be raised is that transmitting the UTXO-set over the blockchain only serves to bloat the blockchain itself and slow down the network. A serialized snapshot of the blockchain's UTXO state should be about the same size as the internal UTXO-set representation used by the Bitcoin Core client. As I understand it, this is less than a gigabyte in size (correction: it is 2.8 GiB as of this writing), so the UTXO-set can be represented in about 2% of the space of the full blockchain. Supposing this proportion holds, and supposing the blockchain is culled once every 5 years, for example, this would allow as much as 500GB (at 100GB/year) to be culled at the cost of just 5GB of snapshot data. The savings would be 495GB, with payoff in disk space, connection time, blockchain relay bandwidth, and so on.

Note that this concept of "state playback" is inspired by hardware debug techniques. In general, we cannot simulate the hardware from clock cycle 0 (for a CPU, for example) because the simulator is just too slow. We also cannot directly observe the state inside the machine during a test. So, during a test, we will occasionally dump a snapshot of the entire state of the machine, allowing an external observation device (e.g. logic analyzer) to save that snapshot for replay. When the test fails on hardware, we load the last replay snapshot into the hardware simulator and then "step" the simulator forward to attempt to recreate the failure (in a reasonable amount of time, like a day or so). In the case of Bitcoin, we can observe the state of the blockchain (the implied UTXO-set) but we want to "forget" its past state in a tamper-proof way. Since we take it as a given that the blockchain is tamper-proof, the solution is obvious. Take a snapshot of the UTXO-set, play it back into the blockchain, and then cull the blockchain prior to the snapshot.

I would like to see this drafted into a BIP.

1) This hash could, in principle, be generated along with every block and inserted into a Merkle tree.

2) There is more than one way to define AONT - for the purposes of the present proposal, you could simply encrypt the snapshot UTXO-set using its hash as the key without loss of any required security property.

Wednesday, October 18, 2017

AlphaGo Zero

DeepMind has just released a paper describing AlphaGo Zero, a version of AlphaGo that has taught itself to play Go better than any human player and any prior version of AlphaGo, based solely on self-play. It did not utilize historical human games or any games other than its own training games. The training rate is startlingly rapid. The primary change was increasing the efficiency of the neural networks so they could learn faster through self-training.

While these results are astounding, it is still hard to see how we "get from here to there" with the DeepMind approach. Human intelligence consists of a patchwork of special-purpose modules interacting in a nebulous ether of neuronal connections, but the essence of general-purpose goal-achievement (or problem-solving) is not contained within an AlphaGo-style decision tree.

In technical terms, AlphaGo Zero's (AGZ) self-training is often described as unsupervised learning (UL) - more precisely, it is reinforcement learning from self-play, with no human-labeled data - and it is the same kind of learning that humans undergo in childhood while learning to walk, speak, stack blocks, throw a ball, and so on. Yet AGZ's self-training looks nothing like natural UL, while natural UL looks very similar across many species that are otherwise very different from one another. In short, AGZ has taught itself to play Go with UL, but it does not seem that the same framework can automatically translate to anything other than Go without a significant investment of hand-coded, domain-specific "glue code." AGZ exhibits no playfulness, curiosity or "anti-fragility" (resilience), qualities that are arguably present in all natural organisms capable of UL.

Hats off to the DeepMind team's amazing achievement. I'm still waiting for someone to pick up the challenge of building an AGI based on a computable approximation of AIXI.

Tuesday, October 10, 2017

Supercomputer: Earth

Earth, also known as Sol III, was a giant supercomputer designed to find the Ultimate Question of Life, the Universe and Everything. Designed by Deep Thought and built by the Magratheans, it was commonly mistaken for a planet, especially by the ape descendants who lived on it. It was situated far out in the uncharted backwaters of the unfashionable end of the Western Spiral Arm of the Galaxy. [Hitchhikers Wiki]

The cells in my body do not know who I am. They do not agree or disagree with my choices. They are alive, they feel things, but their domain of knowledge - so to speak - is restricted. Cells are autonomous agents, so it's not completely incorrect to say that cells make choices. Yet, my cells are not aware of my choices and cannot understand them. Relative to my mind, my cells are deterministic automatons - mere machines. Nevertheless, my cells are me. I feel what my cells feel. In turn, my cells are dependent upon me for life - without a mind to direct the body and its cells, they would all perish.

Robotics expert and AI-researcher Ben Goertzel has partnered with Hanson Robotics to found SingularityNET, a cryptocurrency-based network that will allow individual AI producers to cooperate within a global, distributed AI grid. If they get the details right, the power of this idea to fuel human progress is difficult to exaggerate - its name is well-chosen.

Demand for computation is actually unlimited. The essence of negentropy is prediction. Thus, the essential guiding principle behind all material processes (whether physical, social or economic) reduces to the problem of planning (long-range prediction). But planning - for all but the most rote tasks - is more complex than any textbook algorithm. Real planning requires general-purpose intelligence or "common sense", like that which humans possess (and animals do not).

Neurobiologists have completely mapped the neurons of the microscopic worm C. elegans. AI hype tends to attribute superhuman abilities to AI - if not today, then in the very near future. But it is crucial to keep in mind that modern AI systems do not yet possess even the intelligence of this microscopic worm.


AI is nowhere close to being a solved problem. There is unlimited headroom for artificial solutions to modern problems. As these solutions begin to emerge, we will eventually cross a threshold where the capabilities of AI are strictly superhuman. The implications of this to the structure of human society are staggering. A distributed global super-intelligence like SingularityNET would, in fact, stand in the same relation to individual humans as my mind stands in relation to the individual cells in my body. Furthermore, as we begin to interact with this global super-intelligence, we will become part of it. This is the case physically, not just metaphorically - in time, the pressures of physical and economic law will compel organic integration. I do not necessarily mean that we will become cyborgs (though there will doubtless be people who go that route) but that we will become part of a tight loop: always-connected, always-on, always-interacting with the global mind.

We tend to associate "computation" with the kinds of problem-solving that are difficult for humans - calculating complex mathematical equations or performing countless instances of tedious tasks. But computation is merely the canvas on which we are painting synthetic systems for general purpose problem-solving and decision-making. In other words, we are transitioning to a new phase in the information revolution, a phase in which computation is no longer about software that can perform rote tasks that are hand-coded by specialized engineers. In this new phase, computation is going to become increasingly generalized and will begin to handle increasingly ordinary problems, problems that we typically pay humans to handle.

Consider the practice of cold-calling sales. This is amazingly inefficient. Targeting your sales allows you to reduce costs by increasing the payout (the ratio of sales to calls). Today, we have junk mail, spam, targeted Google and Facebook ads, and so on. But this idea of increasing payout by using better filters is no less true of, say, mining for gold. Predictions about the physical and economic world have physical ramifications. A roads department for a very large city that uses AI methods to predict long-run wear and tear on roads can decrease costs, because the AI's superior predictions will enable streamlined planning of maintenance, repair and rebuilding. The further ahead of time I know that I will need resource X, the more cost-effectively I can plan for that expense. Inventory is another example of this economic fact. How much of any given good should I have on hand at any given time? This is a prediction question, and something that AI is ideally suited to answering for both short-run and long-run patterns.

But prediction always carries with it an ineradicable error margin. The AI may make much better predictions than humans, but sometimes it will over-predict and, other times, it will under-predict. There is a way to solve this problem - allow the AI to drive patterns with "distributed incentives".

Consider the restaurants in a large city on any given night. One night, the west end is slammed but restaurants over on the east end are having a slow night. On another night, the uptown district is flooded with customers while other areas are quiet. These patterns occur due to all kinds of difficult-to-predict conditions - a big rock show happened to be playing the same night as a highly anticipated football game, and both venues happened to be towards one side of town. Economics tells us that whenever a resource is less demanded, its price will tend to go down (and vice-versa). It also tells us that whenever the price of a resource goes down - all else equal - demand for that resource will increase. So, an AI with access to blanket data on human behavior could drive economic patterns for the purpose of reducing economic inefficiencies. For example, when a game and a rock show on one end of town will predictably reduce demand for food service on the other end, an AI could issue discounts for customers who choose to eat on the "dead" side of town. This would spread out demand for food services, increasing customer satisfaction, reducing prices, and increasing food service revenues for the dead side of town.

Of course, this post isn't about the food service industry. What is true of this one industry is true, in principle, of all industries. An AI brain that not only observes economic patterns but also drives them would be able to continually reduce economic inefficiency across all industries simultaneously with an error margin that goes asymptotically to zero.

Throughout this post, I have simply assumed that there will be the global brain. The research behind SingularityNET helps us to understand why we should think there will be just one brain, not many brains in silos. Suppose you own and operate ACME IntelliCorp and you provide AI data-processing services. Suppose I want to start a new company called Omnibus Intelligence that can utilize your service to produce and sell another, unrelated data-processing service. One way to think of what is happening is that my company is using your company like a function call in a computer program. In short, my company is using your company as an API (application programming interface). SingularityNET is merely a platform that is trying to facilitate this model of computational interaction. But even if SingularityNET is a flop, the concept itself is inevitable, just as the Internet was inevitable after the development of the PC and the widespread deployment of computer networks. There is, and only ever will be, one Internet. In exactly the same sense, there will be just one global brain.

Maybe, one day, this global brain will solve the problem of why it exists and it will tell us the answer. Maybe that answer really is 42.

Sunday, October 8, 2017

Notes on a Cosmology - Part 21, The Primum Mobile

In the last post, we argued that there are no dark worlds and that, at root, the Universe is mind or soul. This allows us to genuinely resolve the problem raised by Feynman: that the Universe seems to be performing an infinite amount of computation to determine what is happening at every point of space and time, no matter how finely we divide it.

To see how this problem is resolved, let's start with a concept from 3D gaming called level-of-detail or LOD. Instead of storing a single 3D model of any given game object, a game that implements LOD will store several models of the same object, each model of varying detail.


When an object is very close to the viewpoint camera in the game, it will be rendered with the most detailed model. When it is very far away, it will be rendered with the lowest detail model. At medium ranges, an in-between model will be selected. In this way, a 3D game engine can manage a truly mind-boggling amount of detail. Note that the engine has on the order of 16 milliseconds to render an entire frame and the viewpoint camera might have millions of polygons (basic 3D surfaces) in view at any given time, at full detail. By selectively employing lower LOD for far away objects, the game engine can reduce the total number of polygons to be rendered in any given frame by 1000x or more, to a realistic number. The key is that the engine is able to maintain an illusion of immersive detail by reducing the detail with which it renders distant objects since the eye is unable to distinguish the difference in visual quality anyway.
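A toy sketch of the selection logic (the distance thresholds and model names are, of course, made up):

```python
def select_lod_model(lod_models, distance: float):
    """Pick the model to render for an object, given its distance from the camera.
    `lod_models` is ordered from most to least detailed; each entry carries the
    maximum distance at which that model should still be used."""
    for max_distance, model in lod_models:
        if distance <= max_distance:
            return model
    return lod_models[-1][1]   # beyond every threshold: fall back to the coarsest model

# Example: three models of the same tree at decreasing polygon counts.
tree_lods = [(25.0, "tree_high.mesh"), (100.0, "tree_medium.mesh"),
             (float("inf"), "tree_low.mesh")]
```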

Here, finally, we see the principle of indifference - a thinking tool we introduced in Part 1 of the series - in action. If it doesn't make a difference, there is no difference. The Universe does not really need to "exhaustively compute"; it is enough to present a faithful facsimile of causality, where fidelity is defined on the basis of indifference (choice). If your choices would be the same either way, then there is no actual need for any deeper computation.

Of course, several objections can immediately be raised. First, how is the Universe measuring our hypothetical choices? Second, how is the Universe controlling its "level of detail"? Third, what exactly are the details for which the Universe is varying its level-of-detail?

We have answered the first objection already - the Universe can and really does construct hypothetical sub-universes using virtualization. These sub-universes are replete with every detail of our own "primary" (so to speak) world. By making a complete copy of this world and varying the parameters, it is possible to test whether an individual would have made different choices if the parameters had been different.

The second objection is actually a complex-question fallacy. The Universe does not have a set of "level-of-detail knobs" hidden somewhere that it is tuning. Rather, we are asserting that the fundamental structure of the mathematics of the Universe is the same as though the Universe had such a set of "level-of-detail knobs". In other words, the maths must incorporate a level-of-detail parameter that regulates "how deep" the computation goes for any given process. "But the closer we look, there is always more detail to be seen!" This is true, of course. The same is true of a 3D game engine or a fractal zoom, like a Mandelbrot zoom. Neither 3D games nor fractals actually contain an infinite amount of detail, nor do they require an infinite amount of computation to provide an effectively unlimited amount of detail based on how closely the user "zooms in". More detail is served as it is demanded.

The third objection is the most difficult to answer. To answer it, let's consider the HBO series, Westworld. In this fictional world, Westworld is a kind of theme park based on the old American West. It is populated by robots of such exquisite design and fabrication that they are indistinguishable from living humans, short of mechanically tearing them apart to see what's inside.


In this theme park, the worst thing that can happen (for Westworld's revenues) is breaking the fourth wall, that is, for guests to remember that the robots are not human. The park's creator (Dr. Ford) has gone to extensive lengths to construct the artificial intelligence, the park's landscape and the storylines in such a way that guests of the park are sure to remain fully immersed in the experience, no matter how far they choose to take it (for example, shootouts).

We can define level-of-detail from the point of view of Westworld's corporate revenues - a theme park that provides an immersive experience that keeps guests fully satisfied so they remain loyal customers has created a convincing experience. If guests were to leave early or choose not to return because they were dissatisfied, this would mean that the park's models were not well-constructed - the level-of-detail had not been correctly managed.

The same techniques that can be used in a 3D game could be used just as well in a physical theme park like Westworld. You only need to deploy your most convincing and expensive robotics up-close and personal. More distant "background extras" can be built more economically. The point is that "level-of-detail" is not just a question of video-audio rendering (sensory inputs), it is semantic in nature. Thoughts and ideas can have varying levels of detail. Stories and storylines can have varying levels of detail. Individual personas can have varying levels of detail. Nothing that can potentially come to your attention as a subject of contemplation is exempt from level-of-detail - even history itself is not exempt. Thus, we must conclude that the Universe is varying all of these details and that it is capable of doing this because the Universe is, at root, mind.

We return again to the topic of theology. The root "theos" means "God" and "theology" means "the study of God". It is a broad word and encompasses the study of everything from polytheistic deities to idol-worship to Eastern mysticism to Western transcendentalism. For many people, the word "God" evokes something along the lines of: "The Big Boss who's In Charge and tells everyone What To Do and determines whether they've done a Good Job or whether they're in Big Trouble". For our purposes, this kind of thinking is useless. God - or, at least, the idea of God - is relevant to questions of cosmology by virtue of creation. It is God-the-Creator that we are interested in.

The self consists in two halves. One half is acted upon by the world. The other half acts upon the world. I avoid the words "passive" and "active" since we are never purely being acted upon (this would be annihilation) nor ever purely acting (this would be transcendence); each is always present in the other. The underlying structure of the world that acts upon us follows the laws of quantum systems - it is quantum. Likewise, our action upon the world is subject to these same laws; that is to say, whenever I act, I act "into" or "upon" a quantum world.

When we consider the category of super-human action - whether such a being is acted upon, or acting - we quickly run into the cognitive problem of imagination. By definition, I cannot imagine something that only a being greater than myself can imagine. Yet, I have no reason to rule out the existence of such a being. Thus, I have no reason to suppose that I am not existing in the context of the action of such a being or beings. In short, I cannot rule out that I am an ant in some celestial ant-farm. But if I am, this ant-farm is remarkably well-ordered. This judgment is not prejudiced by the preconditions of my complex brain, either, because I can plainly see the correspondence between the simplest imaginable formalisms and the regularities of the world. While I may not be able to understand a being greater than myself, I can understand things that are simpler than myself (e.g. triangles, magnets, and so on), and the world around me is filled to the brim with such simple - and beautifully ordered - things.

Thus, even if I suppose that I am in a celestial ant-farm, I can meaningfully investigate the laws of this ant-farm as far as the limits of my imagination. And when I do, I find that the laws are the laws of quantum systems. The more recent interpretations of quantum mechanics that avoid the paradoxes of older interpretations all center around one hypothesis (some closer than others): the Simulation Hypothesis. When you look around you, what you are seeing is the inside of a quantum computation (what we are calling the Quantum Monad). But if the Universe is indistinguishable from a quantum computer (as Seth Lloyd says), then the question immediately arises: "What is it computing??" What is the point of the simulation? If we are ants in a simulated quantum ant-farm, what are the builders of the ant-farm computing?

One answer to this question is to simply throw up our hands and say, "Such a being must be greater than ourselves, so there is no way we can answer this question." It's just as well to assume that the Universe is random, computing nothing at all, or evolutionary, computing "whatever survives". But this answer is unsatisfactory because our world is tantalizingly comprehensible at levels far above the wave equation. The progress of human history can be interpreted as a series of accidents but it does not have to be interpreted this way. If we suppose that the ant-farm's builders were not completely indifferent to us (perhaps by virtue of the shared light of conscious awareness and the shared reality of action), then there is good reason to suppose that they have "let down a ladder", so to speak. The possibility of the existence of such a ladder makes the question of teleology far more important than navel-gazing over the possibility of the existence of beings incomprehensibly greater than ourselves.

The Omega-based Kardashev scale we encountered earlier when discussing virtualization is a perfect candidate for this teleological ladder. The key to finding a teleological ladder, if it exists, resides in taking the idea seriously. In other words, I have to look for the rungs or I will never be able to climb. But looking for the rungs presupposes the existence of a ladder. So, it's a catch-22 and the only way to break out of it is to decide whether you think there's a ladder there, or not. Yes, I am talking about faith[1]. If we take this idea of a teleological ladder to the limit, we arrive at the idea of the Primum Mobile. The PM is the "closure" of the Quantum Monad. Instead of thinking of the architecture of the world as being something that is itself in flux, the PM arises from thinking of the architecture of the world - including the teleological ladder - as a fully solved equation. My presence here is also part of that solved equation. Perhaps the cognitive dissonance of our present state of existence arises from thinking of ourselves as being separated from the PM instead of being an integral part of the PM.

In the next post, we will look at the final feature of the Quantum Monad theory: the Logos. We will show how the Logos is the closure of the Quantum Monad and how it gives rise to an alternative to the Simulation Hypothesis that I call the Architecture Hypothesis.

Next: Part 22, The Logos

---

1. I use the "5-year-old test" to distinguish between "Illuminist" attitudes and genuine faith. A greater being than myself would not be interested in self-gratifying back-patting over how much cleverer I am than my fellows, so faith is not about believing a quantum simulation theory, it is about plain old child-like faith.

Monday, October 2, 2017

Notes on a Cosmology - Part 20, The Five Ways

At this point, I want to broach the subject of theology. This subject is often treated as lying outside of philosophy - and, thus, outside of cosmology - but this is a purely modern prejudice. The topic of theology is basically inevitable because we are dealing with so many topics that lie on the beaches of the ocean of the divine: unending life, unlimited mind, boundless virtual creations, and so on. Avoiding the topic in order to satisfy modern ideas of rigorous thinking would not conform to the ground rules for building a toolbox for thinking that we developed earlier. So, let's turn to one of the great theologians of history, Thomas Aquinas, and consider his Five Ways of proving the existence of God.

As Wikipedia notes, Aquinas's arguments are primarily types of cosmological argument for the existence of God. Cosmological arguments start from some set of facts about our world and argue that God (as defined thus-and-so) must exist, otherwise, these facts could not possibly be as they are. Let's look at his second argument,

In the world we can see that things are caused. But it is not possible for something to be the cause of itself, because this would entail that it exists prior to itself, which is a contradiction. If that by which it is caused is itself caused, then it too must have a cause. But this cannot be an infinitely long chain, so therefore there must be a cause which is not itself caused by anything further. This everyone understands to be God.

Modern philosophers can shoot many holes in Aquinas's argument. Mathematicians utilize backwards infinite regressions with ease (e.g. Zorn's lemma), so it is not obvious that a causal chain cannot be infinitely long. If we can see that things are caused, and God is a thing (existing), then how is it that God is not caused? And so on. Nevertheless, I assert that Aquinas is on to something. Here are the five conclusions of each of Aquinas's arguments:

  • There must be something that causes change without itself changing
  • There must be a cause which is not itself caused by anything further
  • There must be something that is imperishable: a necessary being
  • There is something which is goodness itself
  • The behavior of non-intelligent things must be set by something else, and by implication something that must be intelligent

I want to modernize Aquinas's arguments a little bit.

I'll start with the third argument. My mind (conscious awareness) exists, that is, it has presence. I am here. The world that I am aware of (what exists outside of me) seems to be perishable, transient. Yet, what a remarkable coincidence it is that my mind can comprehend the world in which it is supposed to have arisen. In principle, my mind can comprehend every last detail of the physical world, even if it requires me to enlist the aid of computational systems (which my mind can also comprehend) to sort out all the complexities of the physical world. Thus, the appearance that my mind is an impermanent artifact of the world around me must be a mistake. Thus, my mind is imperishable - I am an immortal soul. And if I am an immortal soul, having a beginning, there must be some greater soul which is immortal and has no beginning. This being must exist imperishably, that is, its existence must be necessary.

The first and second arguments treat causality. Let us suppose that there is an infinite chain of causality. At some point, the present consequences of some very remote cause will become very tiny, no matter how great its effects originally were (we know this from the second law of thermodynamics). There is some point at which causes become indifferent to any consciousness. Thus, the existence of consciousness is compatible with an infinitely long chain of causality in which the effects of causes dissipate over time until they are negligible. Yet, there must be some final consciousness that sets the final limit of this indifference. This consciousness must itself be free of the effects of thermodynamics, that is, it must be immortal.

The last two arguments treat consciousness or, at least, aspects of consciousness. The fourth argument treats judgment between varying degrees of goodness. In modern terms, we can view this argument as being connected to the nature of measurement - everything in the world can be measured by something else but there must be some final limit of measurement which itself exists beyond measurement. The fifth argument treats intelligence. The behavior of non-intelligent (or non-conscious) bodies must be following some originary principle or law. There must be some intelligence that is pure will, that is, free of any constraints upon its action, and this intelligence must impose its will upon all other things beside itself.

Anselm defined God as "that being than which none greater can be conceived." This is a useful thinking tool. We can use this definition to derive certain facts that must be true of God if He exists. For example, God must be choosing His own highest end at all times because a being that does not choose its own highest end would be inferior to one that does always choose its own highest end.

If God exists, we know that God must be able to create conscious beings because we are conscious. But, by Anselm's definition, God must be able to create unconscious beings as well. Let us consider for a moment the idea of a world devoid of any conscious being. In what sense can such a world be said to exist? No one supposes that dreams are actual occurrences or involve real people, even though dreaming involves at least some level of consciousness. But a world completely devoid of anything conscious of it simply has no being in any sense. Thus, a world that has existed but has never been observed is indistinguishable from a world which has never existed at all.

The question then arises: how is it that we define the primacy of existence in the material substrate rather than in the mind? I believe there are two main reasons for this tendency. First, we never experience consciousness apart from the body. Second, when we awaken from sleep, we return to a world that has been existing apart from our own consciousness. These two coincidences reinforce the belief that the mind is derived from the body and that the world can (and does) exist without the mind.

However, I assert that this is a mistake of reasoning. In the first case, no one has ever proved that the mind cannot be conscious apart from the body. But the same is not true of the body. The body, as a fact of its nature, cannot live without the mind. The body will die if it is not fed, watered, clothed, and so on, and it is only the mind that is able to make long-range plans to meet these needs. When I dream, I might assume that I am "in my head" but, in reality, I cannot be certain about where I am or even if I am anywhere at all. When I awake, I assume that the world around me has been "ticking away" in the absence of my consciousness but, in reality, I cannot have any certainty about how much time has passed or even whether I have awakened into the exact same body that I fell asleep in.

I have introduced the scholastic theologians Aquinas and Anselm in order to talk about God. I have raised the topic of God in order to talk about consciousness and non-consciousness. And I have raised this topic in order to make the following thesis: there are no dark worlds. This assertion may seem to be out of left field. After all, nobody is actively asserting that there are dark worlds. But the idea of dark worlds is implicit in most modern cosmologies ("the material world before consciousness") and has important consequences to how we think about the nature of consciousness itself.

Because there are no dark worlds, we can easily see that consciousness is the primary thing. What it means for something to exist is for a conscious being to be aware of its existence, whether as a thought or feeling (internal consciousness) or as a sensation (external consciousness). When I am asleep and not dreaming, I am not. Thus, I can make no hypothesis about the material world in my absence.[1]

The dimension of consciousness which we have so far neglected is choice. Choice - or will - is the key to understanding the nature of conscious existence. We can resolve the problem of determinism with an idea that I will call the limit of God's indifference. Consider a young toddler playing in the house. This youngster certainly has choice, agency. When he wants something, he reaches for it. When he wants attention from his parents, he vocalizes. And so on. In the limit, however, his parents have deterministic control over every aspect of his life. They can introduce new objects into his world and they can make currently existing objects disappear. They can limit his movements around the house with child gates. Most of the time, they keep him from going outdoors or they keep him in a fenced play area, and so on. Despite the control that the parents have, the actions that they leave available to the child are in the limit of their indifference. The child is equally welcome to play with the red toy or the blue toy and it is only the wellspring of action within the child himself (his free will) that determines which toy he will choose to play with.

In the same way, we can give a rational account of compatibilism by drawing an imaginary limit of divine indifference. God sets certain limits that we cannot get past, without qualification. But within these limits, He is indifferent to our choices and we have free will, in the ordinary sense of the word.

But in order for us to be liable to God's moral judgment, our free will must partake in the same essence as His own, even though we are under divine restrictions. At this point, the organization of the Quantum Monad that we gave before should become clearer:

My will, no less than God's, superposes upon the causal structure which superposes upon time, which superposes upon the material creation. Thus, what the world is, at its deepest structure, is pure choice. The awareness of constraint - physical laws, bodily functions, social laws, etc. - tends to invert this pyramid and put the material limits at the top, with time and causality below, and my poor, helpless will at the very bottom, giving rise to the feeling of cosmic helplessness, particularly in the face of the inevitability of death.

In upcoming posts, I plan to delve deeper into the consequences of the theological conception of the Universe as pure, conscious choice and how this fits into the cosmological theory we have built up to this point.

Next: Part 21, The Primum Mobile

---

[1] This is an intentional overstatement that I plan to refine in a future post
