Sunday, December 24, 2017

Notes on a Cosmology - Part 23, The Trinity

Traditional Orthodox icon depicting Father (R), Son (L) and Holy Spirit (Top)

The Trinity and the Logos

"The Logos became flesh and made his dwelling among us. We have seen his glory, the glory of the one and only Son, who came from the Father, full of grace and truth." (John 1:14)
In the last post, we discussed the Logos as the expression of the final purpose of God. The Logos is what God is seeking; it is His ultimate aim or end. The opening of the book of John draws a clear parallel between God's creation of the world (in Genesis) and the appearance of the Logos in history. The New Testament explains that the world was created by, through and for the Son. This is not an accident. Note that God created the world by a speech act: "And God said, 'Let there be light'..." God speaks in many ways; in fact, it is not an exaggeration to say that God speaks through all things at all times (Psalm 19). Despite the apparent cacophony of the world, with its conflicting messages, God does not speak equivocally. For mortals like us, then, the question is: which voice is the voice of God? The Logos! The Logos is God's final word on everything. The key observation here is that God does not speak through lifeless statues, rote laws or even words written on pages of paper - God speaks through a living person. He once spoke through an entire people (the Jews) but, today, he speaks through one man: Jesus Christ. Specifically, he speaks to us through the living Spirit of Jesus Christ.

It is crucial to let this idea sink in. We write messages on lifeless, material objects: paper, plaques, banners, posters, displays, and so on. We write stories and convey them through print, film and acting. God writes messages on people and their lives, as such. This is the full meaning of God's sovereignty.
“Shall what is formed say to the one who formed it, ‘Why did you make me like this?’” Does not the potter have the right to make out of the same lump of clay some pottery for special purposes and some for common use? (Romans 9:20,21)

The Trinity and Knowledge

We can divide the ways of thinking about the world into two approaches. The first approach is to see the world as a single substance that manifests a wide variety of particularities. The second approach is to see the world as a (potentially unlimited) number of irreducible particulars upon which we impose a subjective sense of unity (conscious awareness). Because these two approaches do not appear to have any resolution, this is sometimes called the problem of the one and the many.

The Trinity is the solution to the problem of the one and the many. The light of consciousness within us is lit from the one flame of the Spirit (Genesis 2:7), yet all the varieties of particular knowledge proceed from the mind of God (Psalm 139:17-18, I Cor. 2:10). The universal is God speaking to us from within - the particular is God speaking to us from without. This fact is why separation from God is torment - it is a denial of the fact of our relation to God, a relation so intimate that it is impossible to correctly interpret any fact of reality without reference to this relation.


The Trinity and Being

Being is identity. Identity is mutually exclusive between "self" and "other." Thus, the category of being is logically dependent on that which is not-being. But if God is being, and all that he creates is an extension of himself, there is nothing besides God himself and, thus, God does not exist, that is, God cannot distinguish his own existence from non-existence.

Father and Son, together with the shared being of the Holy Spirit, exist each as "self" and "other" to one another. While God's being is, of course, one, this being must be first understood in its logical relation. And this is why we speak of God being both one and three. God's oneness and threeness is not an arithmetical fact about God. It is a logical fact that is prior to all other logical facts. Without God's oneness and threeness, there is no physics, there is no mathematics, there is no philosophy, there is no being of any kind.

Every part of God's being exists, but because Father and Son are distinct, each can point to "other," and this otherness can fulfill the role of non-being (death, destruction, desolation). In this, we see God's knowledge of both good and evil and the susceptibility of the perfectible Creation to falling. God knows the susceptibility of the Creation to the Fall because God is able to conceive of death without being destroyed. No created being in the perfectible state can have this knowledge because it cannot point to non-being, to something outside of itself. Rather, its existence, being created, is logically dependent upon the unconditional existence of God.


The Trinity and Action

If God is a peerless, omnipotent being, he is alone. Thus, he would not be that being than which none greater can be conceived (Anselm) and he would not be God. Another way to put it is this: suppose God wanted to play chess. With whom would he play?

If God is peerless, he cannot act, that is, he cannot choose. The triune relationship between Father and Son - with the Holy Spirit as the shared basis of being - is the only solution to this problem. God can act because he is not alone. The Father is eternally existing with his Son. The Son is eternally existing with his Father. Both share, between them, the Holy Spirit who is coequal with Father and Son.


The Trinity and Substance

The gross features of the material world are defined by the property of mutual exclusion of objects in space. We call this property substance or matter. "Two objects cannot occupy the same space at the same time." Logically, this is a category of mutual exclusion.

Consider our state-of-being from the point-of-view of a heavenly being. Time and space are really obstacles to action. I must travel to my destination (this takes time). I must labor to provide for the satisfaction of my wants (this takes energy). I must arrange my possessions in a way that makes them accessible (this takes space). And so on and so forth. The mind can easily conceive of the absence of these obstacles that frustrate action, so that every state-of-being is immediately actualized by a mere act of will. God's state-of-being cannot be less perfect than what the human mind can easily conceive.

So, then, why is there a material world at all?

Mutual exclusion is impossible without conscious choice because there is no sufficient reason for the ephemeral objects of pure contemplation not to overlap. A non-overlapping geometry requires the application of collision-detection algorithms - such algorithms are necessarily more complex (less probable) than those which permit arbitrary overlap. The material world of substance could not exist without the conscious choice to engage in non-overlap for the purpose of interaction. The material world is substantial by agreement. In short, the material world exists, as such, by agreement between Father, Son and Holy Spirit. Their purpose in the material creation is precisely what is explained in Scripture: to create man in the image of God in a perfectible state and to provide the Redeemer who has delivered fallen man from death.
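To make the complexity claim concrete, here is a minimal sketch in Python (one-dimensional positions, and all names illustrative): the rule that permits overlap needs no checking at all, while the rule that enforces mutual exclusion must, in the naive case, examine every pair of objects.

    import itertools

    def step_unconstrained(positions, moves):
        # Pure contemplation: every object simply takes its intended
        # position; overlap is permitted, so nothing needs to be checked.
        return [p + m for p, m in zip(positions, moves)]

    def step_substantial(positions, moves, radius=1.0):
        # Substance: each intended position must be checked against every
        # other object - a strictly more complex (less probable) rule.
        proposed = [p + m for p, m in zip(positions, moves)]
        for i, j in itertools.combinations(range(len(proposed)), 2):
            if abs(proposed[i] - proposed[j]) < radius:  # would-be overlap
                proposed[j] = positions[j]               # the move is refused
        return proposed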


The Trinity and Causality

Descartes contemplated the possibility that we are deceived by a malicious being of unimaginable power. The famous "cogito ergo sum" was the result of this thought-experiment.

Causality-as-such is very difficult to establish on any kind of formal basis. Suppose you roll a pair of dice and you wish to know whether the outcomes of these rolls are random or whether they are being influenced by an evil trickster. You can count the outcomes of the rolls and compare them with the theoretical probabilities of those outcomes to see if they match up. For example, a pair of dice that consistently rolls a total of 7, 7, 7, 7, ... is not random and is not behaving the way we expect dice to behave on the basis of causality. If your dice behaved this way, you could be quite sure that something was amiss.
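The counting-and-comparing step is easy to make precise. A minimal sketch in Python, using a chi-square statistic against the theoretical distribution of two-dice totals (for 10 degrees of freedom, values much above ~18 are suspicious at the 5% level):

    import random
    from collections import Counter

    def chi_square_dice(rolls):
        # Compare observed counts of two-dice totals (2..12) against the
        # theoretical probabilities: P(total = t) = (6 - |t - 7|) / 36.
        n = len(rolls)
        observed = Counter(rolls)
        stat = 0.0
        for t in range(2, 13):
            expected = n * (6 - abs(t - 7)) / 36
            stat += (observed[t] - expected) ** 2 / expected
        return stat

    fair = [random.randint(1, 6) + random.randint(1, 6) for _ in range(10_000)]
    loaded = [7] * 10_000                  # the trickster's dice
    print(chi_square_dice(fair))           # small: consistent with fair dice
    print(chi_square_dice(loaded))         # enormous: something is amiss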

But no matter how well-behaved the dice seem to be, you can never really know - even if you employ mathematical methods and an uncomputably long running time! - that the results of the dice throws are not being influenced somewhere by an unimaginably greater being. Even if the dice checked out - after uncomputable time - as "true random," you cannot conclude from this investigation that the next roll of the dice will be unbiased. Our sinister Cartesian demon could simply have been waiting for you to check the dice - and believe they are unbiased - before playing shenanigans with them. Stated another way, it is always possible to prove a phenomenon to be non-random, but it is never possible to prove a phenomenon to be random. Scientific claims about quantum randomness, for example, are not rigorous in this sense, because no one could ever distinguish, by experiment or computation or both, a truly random source from a merely apparently random source.

Because of this, a universal conspiracy - that our senses are feeding us an incorrect picture of "the real reality", for example - cannot be ruled out. Conspiracy is the opposite of randomness; random dice do not conspire to make you lose or win at the betting table. But since we can never prove a phenomenon to be truly random - as opposed to apparently random - we can never actually rule out the possibility of universal conspiracy.

Like substance, causality is a limiting factor. Regret is only possible under the constraint of causality. A being than which none greater can be conceived has no regrets, by definition. Such a being has no use for causality and cannot be shackled by any law of causality. Causality operates for the same reason that substance exists: by agreement between Father, Son and Holy Spirit in order to create, to redeem the creation and to glorify the Father through the Son.


The Trinity and Transcendence

Contemplating the infinitude of God can leave the mind lost in a boundless ocean of limitlessness. This state of mind is equivocal and is incompatible with the presence of conscious awareness. The presence of the being of God is a result of the focus of God's mind on the Logos - His purpose for being and existence.

God's mind, conceived as an infinite, uncentered consciousness, would be necessarily equivocal. Every point of being would be equal to every other point of being. In other words, being conscious in a state of utter torment would be equivocal with being conscious in a state of utter bliss. This contradicts our working definition of God as the greatest conceivable being.

The Father's mind is univocally focused on the single end of raising the name of his Son above every other name and causing all things in heaven and earth to bow and acknowledge his Son as Lord. Thereby, the Father glorifies himself (Phil. 2:9-11). This is how it is that the Father causes that not one sheep will be lost (John 6:39). To borrow a mathematical metaphor, the mind of God is like an infinite plane with a single point that is the designated center of the plane - that point is the Logos.


Conclusion

God is light. This light is not physical light - rather, physical light is a material metaphor of the Divine light. The Divine light is conscious awareness, choice and, especially, holiness. The Trinity is the shared reality of light in God: God's awareness of all that is, God's power to choose as he sees fit, God's dispersal of all darkness that opposes his light, God's absolute perfection - a perfect creator, Father, redeemer and ruler. It is the light of God that best illustrates God's oneness without becoming entangled in irrelevant questions of arithmetic - the light of the Father, the light of the Son and the light of the Spirit are one. They are one being, with one mind, one will, one purpose and one, divine perfection.

This series began as an inquiry into the causes and conditions of existence - a cosmology. As we near the end of the series, I want to underscore that any conception of existence that is informed solely by the particulars of material world-states is woefully inadequate. Your life is a story that God is telling you. His redemption is the focal-point of that story. You cannot live apart from God's redemption, that is, you will die without redemption. The common idea of a tension between faith and reason, or between faith and science, is a wholly mistaken notion. God is not a story that we are telling each other - we are a story that God is telling us!

Next: Part 24a, Epilogue

Friday, December 22, 2017

Lossless Compression with a Lossy Compression Ratio


Lossy compression is used on rich media - audio, images, video, and so on - to great effect. JPEG image compression can drastically reduce the file size of an image without severely degrading the quality of the image. Compression ratios of 90% or more are attainable for applications where image quality is less important. This tradeoff between data quality and file size is only possible when the type of data being compressed is noise-tolerant. A compressed image of a cat may be slightly fuzzy or have slight mis-coloration, but it is still recognizable as a cat. A text file, such as a legal document or a computer program, on the other hand, must be compressed losslessly, because the noise introduced by lossy compression would almost certainly destroy semantic content. For these kinds of files, we use lossless compression.

It would be nice, however, if we could achieve the filesize benefits of lossy compression, without introducing noise - can we have our lossless compression cake and eat it, too? Consider the following quote:

No one rejects, dislikes or avoids pleasure itself, because it is pleasure, but because those who do not know how to pursue pleasure rationally encounter consequences that are extremely painful. Nor again is there anyone who loves or pursues or desires to obtain pain of itself, because it is pain, but because occasionally circumstances occur in which toil and pain can procure him some great pleasure. To take a trivial example, which of us ever undertakes laborious physical exercise, except to obtain some advantage from it? But who has any right to find fault with a man who chooses to enjoy a pleasure that has no annoying consequences, or one who avoids a pain that produces no resultant pleasure?

- Cicero, De Finibus

As we know, the more often a particular character or word is repeated in a text, the higher the redundancy of that text and the more compressible the text is. Can we apply a transform to this text that will render it more redundant, without losing our ability to recover the text exactly?

No one rejects, dislikes ◦◦ avoids pleasure itself, because it ◦◦ pleasure, ◦◦◦ because those who do ◦◦◦ know how ◦◦ pursue pleasure rationally encounter consequences that ◦◦◦ extremely painful. Nor again ◦◦ there anyone who loves ◦◦ pursues ◦◦ desires to obtain pain of itself, because it ◦◦ pain, ◦◦◦ because occasionally circumstances occur ◦◦ which toil ◦◦◦ pain can procure him some great pleasure. To take ◦ trivial example, which of us ever undertakes laborious physical exercise, except ◦◦ obtain some advantage from ◦◦? But who has any right ◦◦ find fault with ◦ man who chooses ◦◦ enjoy ◦ pleasure that has ◦◦ annoying consequences, or one who avoids ◦ pain that produces ◦◦ resultant pleasure?

A quick glance at the text should allow you to convince yourself that an English speaker will be able to easily reconstruct the original text with few, if any, errors. Clearly, this version of the text has higher redundancy because we have replaced 46 separate characters – drawn from a subset of the English alphabet – with 46 repetitions of a single character. Thus, this version of the text admits a better compression ratio. Is there a way that we could ensure that the person who is trying to decode this text has reconstructed the original text? The answer is to take a hash of the original text and give this to the person trying to decode the obscured text. In this case, the CRC32 of the original text is 0x77ea20bb. If the person decoding the obscured text makes a mistake, say, by choosing “yet because” instead of “but because” for the third obscured word, the resulting CRC32 will be 0xffa9da29. So, by adding a few bytes to store the hash of the original text, we can strike out words that are easy to guess from the context, and an English speaker will be able to recover the original text and convince herself that the reconstructed text is identical to the original.
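In Python, the check the decoder performs is a single standard-library call (the hash values above are taken from the example as given; `guessed_text` is hypothetical):

    import zlib

    def crc32_hex(text: str) -> str:
        # zlib.crc32 returns an unsigned 32-bit value in Python 3.
        return format(zlib.crc32(text.encode("utf-8")), "08x")

    def verify(reconstruction: str, expected: str) -> bool:
        # The reconstruction is accepted only if every struck-out word
        # was guessed exactly.
        return crc32_hex(reconstruction) == expected

    # verify(guessed_text, "77ea20bb") -> True only for the exact original.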

We have been careful to strike out only words that are easy-to-guess for an English speaker (from context) – but do we have to stop there? The answer is no. Let’s say we strike out a large word that cannot necessarily be guessed from the context:

No one rejects, dislikes ◦◦ avoids pleasure itself, because it ◦◦ pleasure, ◦◦◦ because those who do ◦◦◦ know how ◦◦ pursue pleasure rationally encounter ◦◦◦◦◦◦◦◦◦◦◦◦ that ◦◦◦ extremely painful. Nor again ◦◦ there anyone who loves ◦◦ pursues ◦◦ desires to obtain pain of itself, because it ◦◦ pain, ◦◦◦ because occasionally circumstances occur ◦◦ which toil ◦◦◦ pain can procure him some great pleasure. To take ◦ trivial example, which of us ever undertakes laborious physical exercise, except ◦◦ obtain some advantage from ◦◦? But who has any right ◦◦ find fault with ◦ man who chooses ◦◦ enjoy ◦ pleasure that has ◦◦ annoying consequences, or one who avoids ◦ pain that produces ◦◦ resultant pleasure?

Here, we have increased the number of struck-out characters from 46 to 58. The word that has been obscured has length 12. There are thousands of possible words that could be substituted here but, in all likelihood, only one of them will satisfy the criterion that the CRC32 hash of the resulting text block be 0x77ea20bb – the word “consequences”. Thus, we could write a program that automates a search through the dictionary until it finds the missing word and gives us the result.
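A minimal sketch of such a program, assuming the obscured text has been turned into a template with a single '{}' placeholder and that some word list is available (the file name below is hypothetical):

    import zlib

    def find_missing_word(template: str, words, expected_crc: int):
        # Substitute each candidate into the blank and accept the first
        # one whose CRC32 matches the hash of the original text.
        for word in words:
            candidate = template.format(word)
            if zlib.crc32(candidate.encode("utf-8")) == expected_crc:
                return word
        return None

    # find_missing_word(obscured_template,
    #                   open("words.txt").read().split(),
    #                   0x77ea20bb)   # -> "consequences", in all likelihood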

In itself, this is not very useful. After striking out just a handful of hard-to-guess words, the time required for a brute-force search would become prohibitive – it grows exponentially with each word struck out. But we have identified a method for applying a generic “guess-and-check” algorithm to the reconstruction of an arbitrary text from an obscured text with higher redundancy. Can we make this algorithm more efficient?

In the first example, it would be quite easy to train a neural network to perform a guess-and-check algorithm to emulate the guesses of an English speaker. But if we focus on trying to recreate the mind of an English speaker, we miss the wider application of the method for the purposes of compression. Let us call the person who encodes information by increasing its redundancy the obscurer (O) and the person who tries to reconstruct the original input, with the aid of its checksum, the revealer (R). O is able to strike out the easy-to-guess words because she knows that the substitutions will be easy-to-guess for R. So, as long as this property holds – that R will easily be able to guess the substitution based on the information given by O – we have a system that can losslessly encode and decode information into a form with higher redundancy than was present at the input to the system.
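The essential contract between O and R can be sketched in a few lines. Assume a hypothetical, deterministic predictor(context) shared by both parties, returning a best-guess next word and a confidence; because O strikes a word only when the shared predictor already reproduces it, R recovers the text exactly, and the hash becomes a safety net rather than a necessity:

    def obscure(words, predictor, threshold=0.9):
        # O strikes out any word the shared model predicts from its left
        # context with high confidence.
        out = []
        for i, w in enumerate(words):
            guess, confidence = predictor(words[:i])
            out.append("◦" * len(w) if guess == w and confidence >= threshold else w)
        return out

    def reveal(tokens, predictor):
        # R runs the same model over the same (already revealed) context,
        # so every struck-out word is reconstructed deterministically.
        out = []
        for t in tokens:
            out.append(predictor(out)[0] if set(t) == {"◦"} else t)
        return out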

We can think of the obscurer and revealer as playing a game, such as a crossword puzzle – O is trying to construct puzzles that have as much redundancy as possible for a given degree of difficulty and R is trying to reconstruct the original input, based on the puzzle, as quickly as possible. We can implement O and R as a pair of convolutional neural networks (CNNs) using Monte Carlo Tree Search (MCTS) – a combination used to great effect by DeepMind in its AlphaGo, AlphaGo Zero and AlphaZero game software. During training, we calculate the difficulty of each encoding/obscuring choice that O makes and we train O to prefer paths that result in greater redundancy, while avoiding paths that result in excessively high difficulty for R.

For each compression context, we choose a training corpus and train O and R. We can think of this as choosing which game O and R will play with each other. An English-text game is different from a music-audio-file game, and so on. It is common practice in modern compression utilities to automatically switch compressors based on context. For this purpose, we train a feed-forward network Q to function as the meta-compressor. During compression of a file, Q switches context as appropriate, so that O is likely to be playing the game that is best suited to increasing the redundancy of the input file for a given degree of difficulty - including, in the case of random data, the pass-through game in which the source information is passed through as-is. For each game on which O and R are trained, we can vary the difficulty parameter to allow a user to choose between quick, mild compression and slower, more aggressive compression.
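As a crude, hand-written stand-in for the trained network Q, a simple router illustrates the idea (the game names and the entropy threshold are hypothetical):

    import math
    from collections import Counter

    def byte_entropy(block: bytes) -> float:
        # Shannon entropy in bits per byte; values near 8.0 indicate
        # incompressible, random-looking data.
        if not block:
            return 0.0
        n = len(block)
        return -sum(c / n * math.log2(c / n) for c in Counter(block).values())

    def route(block: bytes) -> str:
        if byte_entropy(block) > 7.9:
            return "pass-through"      # random data is passed through as-is
        return "text-game" if block.isascii() else "binary-game"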

Note that we do not have to use the CRC32 hash; it was merely chosen as an academic example. The hash should be chosen based on probability considerations, that is, it should be chosen such that the probability of collision is negligibly small for the given context. The puzzle game played between O and R uses as many hashes as are suitable for guiding the guess-and-check puzzle. The problem with obscuring random words is that the guess-and-check complexity grows exponentially with each word obscured. By choosing to obscure only words that are easy to guess for R, and by obscuring only so many words before providing a puzzle hint (hash), O can limit the difficulty of R’s task.
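A back-of-envelope bound makes "negligibly small" concrete: for an n-bit hash and k candidate reconstructions, the chance that some wrong candidate collides with the true hash is roughly

    P(false match) ≈ k / 2^n

With CRC32 (n = 32) and a dictionary of k = 100,000 candidate words, this is about 10^5 / 2^32 ≈ 2 × 10^-5 - fine for a worked example, while a context with many more candidates would call for a longer hash.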

We can think of the game being played between O and R as building and pruning a variable-order, conditional-entropy model of the input. At each branch-point in the model, the lowest entropy (most probable) branches are liable to be discarded (obscured). Every so many branches (based on the training of O and R), a puzzle hint is given by O so that R can reconstruct (reveal) the missing branches with a reasonable amount of difficulty.
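The simplest instance of such a model is a bigram table, sketched below; a model like this could also serve as the shared predictor in the obscure/reveal sketch above:

    from collections import Counter, defaultdict

    def train_bigram(corpus_words):
        # Count, for each word, the distribution of words that follow it.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus_words, corpus_words[1:]):
            follows[prev][nxt] += 1
        return follows

    def prunable(words, follows, threshold=0.5):
        # A branch is discarded (a word is obscured) when it is the model's
        # most probable continuation of the previous word.
        flags = [False]   # the first word has no left context
        for prev, w in zip(words, words[1:]):
            dist = follows.get(prev)
            if not dist:
                flags.append(False)
                continue
            best, count = dist.most_common(1)[0]
            flags.append(best == w and count / sum(dist.values()) >= threshold)
        return flags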

The resulting, more redundant data produced by O resembles lossy compression because the obscured information is simply discarded. Any suitable, lossless compression algorithm can be applied to the output of O. If O and R are well-trained, this should result in an improved compression ratio or, at worst, no change to the compression ratio.

Friday, December 15, 2017

Fuzzy Sets and Artificial Intelligence

Many patterns of Nature are so irregular and fragmented, that, compared with Euclid — a term used in this work to denote all of standard geometry — Nature exhibits not simply a higher degree but an altogether different level of complexity … The existence of these patterns challenges us to study these forms that Euclid leaves aside as being "formless," to investigate the morphology of the "amorphous." - Benoit Mandelbrot, as quoted in a review of The Fractal Geometry of Nature by J. W. Cannon in The American Mathematical Monthly, Vol. 91, No. 9 (November 1984), p. 594
Artificial intelligence requires a new way of thinking about both Nature and computation. AlphaZero has demonstrated a fundamentally new form of chess playing that did not exist before. Its style of play has been described as alien, resembling neither the style of human play nor the style of classical machine play.

Fuzzy set theory (or fuzzy logic) is an alternative approach to standard, "crisp" set theory. With fuzzy sets, every element in the universe of discourse has a "degree of set membership" in one or more sets. The degree of membership is a value between 0 and 1 that can, under certain conditions, be interpreted as a probability (it is a mistake to treat degree of set membership as identical with probability, however).
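A textbook illustration (the anchor heights are arbitrary): membership in the fuzzy set "tall" ramps smoothly from 0 to 1 rather than jumping at a crisp boundary.

    def tall_membership(height_cm: float) -> float:
        # Degree of membership in the fuzzy set "tall": 0.0 below 160 cm,
        # 1.0 above 190 cm, and a linear ramp in between.
        if height_cm <= 160:
            return 0.0
        if height_cm >= 190:
            return 1.0
        return (height_cm - 160) / 30

    print(tall_membership(175))   # 0.5 - neither clearly tall nor clearly not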

Let's consider the category of image-recognition - not merely machine image-recognition but the category, in general (including human or other image-recognition). Let us say we have two large sets of images T and L. T contains images of tigers shot from a wide variety of distances, angles and visual conditions. L has a similarly wide selection of images of lions.

Using T and L as our ground truth, let us randomly select an image from one or the other set and submit this image to a test subject for identification. The test subject can answer "lion," "tiger" or "unknown". When the subject answers "lion" for an image drawn from T - or vice-versa - we can say that this is an error as measured against the ground truth. But sometimes the subject will not be able to make any positive identification, no matter how carefully they attempt to do so - perhaps because the image is too fuzzy or the animal is too distant or the particular angle or image conditions cause the animal's appearance to be equivocal with the appearance of the other animal. From the perspective of the ground truth, "unknown" is always an erroneous response. But this contradicts our intuition that "unknown" is a perfectly reasonable response for cases where the image information required to distinguish an element of one set from the elements of the other (on the basis of the image alone) is lacking. In such cases, "unknown" is the correct answer and an answer of "tiger" or "lion" would be flatly incorrect or, at best, a mere guess.

Instead of categorizing answers according to the ground truth, we can allow the test subject to associate some degree of confidence with the answer - [L,1.0] is "more of an element" of L than [L,0.9]. As we submit the images to the test subject for review, two new sets - L' and T' - will be formed, describing that subject's classification of the images, along with an associated confidence parameter. This approach allows us to directly express equivocation, since the test subject may answer [L,0.5],[T,0.5] in order to classify an image as being equally a member of either set - this would have been an "unknown" in our previous arrangement. But the test subject can now express degrees of equivocation, so that [L,0.6],[T,0.4] classifies an image as slightly more a member of L than of T. Of course, any item in the universe of discourse can be fully included in more than one set, so membership is not normalized to 1.0; an answer of [L,0.1],[T,0.1] may express the test subject's doubt that either animal is in the picture at all.
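The answers are naturally represented as label-to-degree mappings; note that, unlike probabilities, the degrees are not forced to sum to 1:

    # Each answer assigns a degree of membership to one or more labels.
    answers = [
        {"L": 1.0},              # confidently a lion
        {"L": 0.5, "T": 0.5},    # fully equivocal - the old "unknown"
        {"L": 0.6, "T": 0.4},    # slightly more lion-like than tiger-like
        {"L": 0.1, "T": 0.1},    # doubt that either animal is present
        {"L": 1.0, "T": 1.0},    # a lion and (separately) a tiger in one image
    ]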

But now let us introduce the liger.



Ligers are a fact of physical reality (they are a real, existing hybrid). But ligers break our L/T dichotomy. Even our fuzzy sets don't help - [L,1.0],[T,1.0] should indicate the situation where a lion and (separately) a tiger are present in the image. That is, a liger - being its own hybrid - requires its own classification, let's call it G. What I am asserting is that, at the macroscopic level of observation, reality is inherently continuous and, thus, the category of categorization itself is always liable to breakage. This is the black swan theory - no matter how complete we feel our theory is, the possibility of a black swan always exists. No matter how much we rationalize having missed the possibility of a black swan (after the fact), the fact remains that we overlooked this possibility because reality is fundamentally continuous - if there are lions (L) and there are tigers (T), there is always the possibility that there are ligers (G), a fundamentally new category that does not belong to any already-known category.

The implications go down to the foundations of math itself. Modern math is based on standard set theory. This kind of set theory is ideal for symbolic reasoning, the kind that mathematicians use almost exclusively. But note that not all reasoning must be symbolic. Here is the proof of Pythagoras's theorem. I will explain it using symbols but it is not comprehended symbolically:

The image on the left shows two gray, square regions. The long sides of the triangles are labeled a (all equal), the short sides are labeled b (also all equal). To transform the left image to the right image:
  • The red triangle stays where it is
  • The blue triangle slides all the way down
  • The green triangle slides all the way to the left
  • The yellow triangle slides to the upper-right
The hypotenuses of the triangles are, obviously, equal, and labeled c. Since these are right-triangles, the angle formed by placing the long-side and short-side of the triangles abutting on the same line must be 90 degrees, since 180 - 90 = 90. Thus, the gray region in the center of the right image is a square and its area must be c². This is a proof of Pythagoras's theorem.
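For the symbolically inclined, the same rearrangement can be written as an area count: both pictures fill the identical (a + b) × (a + b) square with the same four triangles, so the gray leftovers must be equal in area:

    a² + b² + 4·(ab/2) = (a + b)² = c² + 4·(ab/2)
    ⟹  a² + b² = c²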

There is nothing about this proof that requires the use of symbols. You could even build a physical model of this proof, if you wanted. It is even possible to perform numerical calculation without the use of symbols. Techniques involving only a straight-edge and compass easily allow numerical calculations to be performed to a handful of significant digits.

Standard set theory - and the mathematics built on it - naturally assumes noiselessness in the symbols themselves. This means that we always recognize the symbol 3 as the number it represents and never confuse it with another number, such as the number four. It also means that there are no categories that break the syntax of our formal system - there are no ligers among the symbols of mathematics (imagine a 3 and 4 superimposed, for example).

In the real world, noiselessness is never absolute; it is always a matter of degree, based on the redundancy and other error-correcting features of the chosen encoding. In the limit, we must admit that this is even true of human mathematics. Human knowledge and human memory - even with all their external, material aids - are not perfectly noiseless.

But noisy symbols are like fuzzy sets, or ligers. We live in a Universe where absolute noiselessness is simply not in the attainable set of conditions, but where our most effective theories of reasoning and material causality are built on symbols that are supposed to be made out of noiselessness. Ligers break noiseless theories.

So, what we want is a system that ligers can't break. A liger doesn't break reality, it just makes it different when we discover one. There is no reason we cannot build formal systems that act like the material world - systems that are noise-tolerant. It is still possible to reason with a quasi-consistent system of symbols - fuzzy symbols. In fact, the world just is fuzzy symbols being interpreted by the mind-body system. In short, the mathematics of reality is fuzzy mathematics.

Unlike classical computation systems, AI systems are inherently fuzzy. They are good at fuzziness, unlike their brittle forebears. But AI systems are going to face increasing headwinds as they improve at fuzziness. The human brain can easily distinguish a lion from a tiger, even at a very young age and after seeing only a tiny number of examples - some of them perhaps entirely schematic! Yet our brain has great difficulty performing long division on numbers more than a handful of digits in size. This is a result of the brain's fuzzy orientation - it does not expend precious mental resources on noiselessly encoding decimal digits, which would enable us to perform rapid long division in our heads. The more fuzzy AI becomes, the more it is going to face the same obstacle - when it encounters a formal problem with high logical depth, it will need to utilize an external, classical computational process to handle this problem, in the same way that a human utilizes a calculator to handle such problems.

Fuzzy sets are not exactly identical with quantum mathematics but it is tempting to wonder if it is possible to naturally represent a fuzzy set theory as a quantum system. This could even establish a correspondence between the limits of algorithmic complexity - which we have explored in previous posts - and the a priori limits of physical observation that quantum theory predicts (the Planck limits).

In this view, the reason that quantum systems act more like fuzzy sets than like crisp sets would be a consequence of the limitations of the observer (us). An observer of limited complexity can only distinguish two distinct objects up to that limit of complexity. If two objects are different, but this difference can only be perceived at a level of complexity beyond that possessed by the observer, they will appear to that observer to be the same. By the same token, if two objects are the same (have the same properties), but this identity can only be perceived at a level of complexity beyond that possessed by the observer, they will appear to that observer to be different. Today, this is purely academic speculation. But in a world in which Artificial Super Intelligence exists, this will no longer be an academic matter. We may end up in a world where AI is able to distinguish between things that look the same to us - no matter how much scientific instrumentation we apply - and vice-versa. Between here and there, we're going to need a robust language in which to discuss fuzziness. Such a language may look as different from traditional mathematics as the above proof of the Pythagorean theorem looks from an algebraic proof of that theorem.
