Wednesday, July 12, 2017

The Logic of Life-extension

LEAF: What is the goal of life-extension?

Life-extension fundamentally alters the logic of means-end analysis. The infographic in the linked article captures the essence of this change. Living longer is not an end in itself. For example, if I were decrepit, I would not want to live longer; I would want the release of death to take me away from my decrepit state. Thus, life-extension is really about extending life in its flourishing state.

But what is it to flourish? What is eudaimonia? What is the summum bonum? These questions are all interlinked. The many possible answers given by philosophy are of no use in any particular case. Perhaps life has no meaning. But my life has meaning to me. Perhaps there is a grand design of which I am just one tiny cog, but the only designs that can actually matter to me are my own designs.

I don't know what my highest goal is. I can choose a goal at random (e.g. "retiring rich", "leaving the world a better place", etc.), but this doesn't mean I have actually chosen the highest goal. I have chosen goals in the past, goals which I later came to regret. In retrospect, I came to view them as objectively inferior to other goals I could have chosen but did not. Because I am capable of improving my choice-strategies, I do not want to keep repeating the mistakes of the past. This means I need some kind of objective standard by which to assess goals, a standard strong enough that my future self will look back on the choices of my present self with lasting approval.

I don't have a crystal ball, so I can't do this by "looking into the future". I don't have any super-powers, so I can't "will" the world into being such that it suits my whims. All that I have at my disposal are my reason, my knowledge, and the other aspects of my natural endowment. But this might suffice if I am not completely unlucky.

I do not have an objective standard by which to know my highest goal. But imagine I had a powerful simulator that could simulate all possible choices I might make and their long-run consequences. This would be the next best thing to knowing my highest goal, because if it is possible to work it out objectively, then I will eventually succeed, given enough time to explore the tree of all possible choices (imagine I am indefatigable). So, even without knowing my highest goal, I can know that a good proxy-highest-goal is to build a simulator that will allow me to simulate all possible choices and their long-run consequences. Thus, I have deduced my proxy-highest ends:
  1. Build a choice-simulator that models all my possible choices and their long-run consequences, so I can search the space of choices until I find the path that leads to knowledge of my highest end, and then follow that path.
  2. Extend my healthy lifespan as much as possible, so I can explore the choice-simulator as deeply as required to achieve (1).
  3. Extend my "life-stamina", that is, my will-to-be, so that I never give up on (1) and (2).
Thus, life-extension can be seen as the next-to-proxy-highest end. It is next to the next-most-important thing.

It might not be clear what I mean by "choice-simulator". By this, I really mean any process - whether technological, logical, social or otherwise - by which I can calculate very long-run consequences of present actions. For example, there are companies that build silicon fabrication facilities, and these facilities can cost billions of dollars each. The project managers who reckon the project timeline and budget are "choice-simulators": they are prognosticating, engaging in manual simulation of a real process by forming reasonable, evidence-based projections of how the project is likely to unfold. The burgeoning field of AI is going to sharpen and extend this capability by many orders of magnitude in almost every field of human endeavor.
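To make this concrete, here is a minimal, purely illustrative sketch in Python of what a choice-simulator does at its core. Everything in it is invented for illustration: the choice set, the simulate_step transition model, and the evaluate scoring function are placeholders, and a real consequence model is precisely the hard part. The sketch simply brute-forces every choice sequence up to a fixed horizon and reports the one whose simulated long-run outcome scores highest.

```python
# Toy illustration of a "choice-simulator": exhaustively explore a small,
# hypothetical tree of choices to a fixed depth and report the sequence
# whose simulated long-run outcome scores highest. The transition model
# and scoring function below are invented stand-ins, not real models.

from itertools import product

CHOICES = ["save", "spend", "study"]   # hypothetical options at each step
HORIZON = 4                            # how many steps ahead to simulate

def simulate_step(state, choice):
    """Stand-in consequence model: returns the next state."""
    wealth, knowledge = state
    if choice == "save":
        return (wealth + 1, knowledge)
    if choice == "spend":
        return (wealth - 1, knowledge)
    return (wealth, knowledge + 1)     # "study"

def evaluate(state):
    """Stand-in for an objective standard of long-run flourishing."""
    wealth, knowledge = state
    return wealth + 2 * knowledge

def best_path(initial_state):
    """Brute-force search over every choice sequence up to HORIZON."""
    best = None
    for path in product(CHOICES, repeat=HORIZON):
        state = initial_state
        for choice in path:
            state = simulate_step(state, choice)
        score = evaluate(state)
        if best is None or score > best[0]:
            best = (score, path)
    return best

if __name__ == "__main__":
    print(best_path((0, 0)))   # e.g. (8, ('study', 'study', 'study', 'study'))
```

In practice the tree of choices is far too large to enumerate, which is exactly why (2) and (3) above matter: the search takes time and stamina, and better simulators, AI-assisted or otherwise, are what let us prune the tree and push the horizon further out.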



Now, it may be that I will never figure out my highest end, even given (1)-(3). It may be that when I find my highest goal, I regret knowing it. It may be that (1)-(3) expose me to dangers I could otherwise have avoided, dangers worse than anything I would have encountered in a non-extended life. Given that we are reasoning on the furthest horizon of imagination, it is best to dispense with any presuppositions that we do not actually need.

The first presupposition we will dispense with is autonomy. I suppose that I am autonomous, and this works well enough for my day-to-day life. But do I actually know that I am autonomous, in the way that I know the Pythagorean Theorem is true? I do not. Thus, it is possible that (1)-(3) are part of a larger determinism. If so, it is worth considering whether this larger determinism is benevolent or malignant. It is also worth considering how effort is best directed in either scenario (deterministic versus autonomous).

The second presupposition we will dispense with is the atelic supposition: do I know that there is no telic force that is interested in my development, specifically in respect to my knowing my highest end? I do not. Thus, it is possible that (1)-(3) are part of a telos that is beyond my present understanding. As with the previous presupposition, it is worth considering whether this greater telos is benevolent or malignant. It is also worth considering how effort is best directed in either scenario (telic versus atelic). I plan to extend these thoughts in upcoming posts.

