You have ten years to live.
A pill exists that has a 99% chance of doubling your remaining lifespan and a 1% chance of instantly killing you.
Would you take the pill?
When would you take the pill?
How many times would you take the pill?
According to expected utility theory, more life is always better. Thus, given a 99% chance of gaining 10 years weighed against a 1% chance of losing 10 years, taking the pill appears to be the better choice, as
(10)(.99) - (10)(.01) > 0.
However, once you take the first pill (and assuming you don't fall for the Gambler's Fallacy), the expected utility of the next pill becomes
(20)(.99) - (20)(.01) > 0, so taking the next pill is again the better choice.
Ad infinitum (or until you land in the 1% and die).
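The expected-value arithmetic above can be sketched in a few lines (Python is my choice here; the 10-year starting point and 99%/1% odds are the thought experiment's):

```python
# Expected change in lifespan from one pill: you stake your entire remaining
# lifespan, gaining it (a doubling) with probability 0.99 and losing it with
# probability 0.01.
def expected_gain(remaining_years, p_survive=0.99):
    return remaining_years * p_survive - remaining_years * (1 - p_survive)

years = 10.0
for pill in range(1, 4):
    print(f"Pill {pill}: {years:.0f} years at stake, "
          f"expected gain {expected_gain(years):+.1f} years")
    years *= 2  # survived: remaining lifespan doubles before the next pill
```

Each pill's expected gain is 0.98 times the years at stake, so raw expected value says to keep taking pills forever.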
(The graph is generalized from data obtained in scientific studies; it is not necessarily the best way of evaluating utility, but it is how most people evaluate it.)
Of course, this deals with utility rather than actual value. The actual value is the number of years gained or lost, whereas the utility is how much you get out of those years. Looking at the graph, gaining 20 years of life does not feel twice as good as gaining 10 years (especially at more extreme ages, where quality of life declines; but even without accounting for quality of life, a one-time gain of 20 years feels less than twice as good as a 10-year gain).
The main difference between expected utility theory and prospect theory is the inclusion of loss aversion. Loss aversion requires a reference point, which allows gains and losses to be calculated.
Taking loss aversion into account, you would continue to take pills until
(gain)(.99) ≤ (loss)(.01) (where the gain and loss are utilities which are a function of years. Individual functions vary, so I cannot give a more precise mathematical explanation).
(Expected utility theory fails spectacularly in this case, as it recommends always taking favourable bets even though you will eventually lose everything. But then again, it implies that the pill can give an infinite amount of life, which we are assuming in this situation.)
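To make the stopping rule concrete, here is a sketch with one toy value function: gain utility that saturates, and linear loss utility scaled by a loss-aversion factor of 2.25 (a figure Kahneman and Tversky report). The 50-year saturation scale is an arbitrary assumption of mine; as noted above, individual functions vary, so the resulting pill count is purely illustrative:

```python
import math

LOSS_AVERSION = 2.25  # Kahneman & Tversky's estimate; individual values vary
SCALE = 50.0          # years at which gain utility saturates (assumed)

def gain_utility(years):
    return 1.0 - math.exp(-years / SCALE)  # concave, bounded: diminishing returns

def loss_utility(years):
    return LOSS_AVERSION * years / SCALE   # steeper, unbounded: loss aversion

def take_another_pill(remaining, p=0.99):
    """Keep taking pills while (gain)(.99) > (loss)(.01) in utility terms."""
    return gain_utility(remaining) * p > loss_utility(remaining) * (1 - p)

years, pills = 10.0, 0
while take_another_pill(years):
    pills += 1
    years *= 2  # survived; remaining lifespan doubles
print(f"Stop after {pills} pills, with {years:.0f} years at stake")
```

With these particular parameters the rule stops after 8 pills. A pure power-law value function, by contrast, never stops, because gains and losses then scale identically and the ratio between them is constant.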
But this only gives the answer for taking all the pills at once, at the very beginning. What about the best time to take the pills?
According to prospect theory, the utility function is concave, so the first quantum of benefit gives the most utility per quantum. However, the greatest loss per quantum also appears, on the graph, to come from the first quantum. Thus, the best time to take the pill is whenever the ratio of expected utility gained to expected utility lost is greatest. (Again, utility functions may vary; a general rule does not exist.)
However, there are a few nuances in changing the time. Taking the pill at the very beginning, the value is
(10)(.99)-(10)(.01). If you take the pill with only one year of your life remaining, the value becomes
(1)(.99)-(1)(.01). In terms of maximizing value, then, you should take the pill as early as possible, reaffirming the earlier calculations. Applying your utility function to the calculated values allows you to determine the optimal time to take each pill (given that you have the utility function and that it is invariant over time).
All this theorizing is fine and all, but how many pills would you take? I haven't worked out my own utility function yet, but, from my line of reasoning about the negative utility of death, I would take a large quantity of pills all at once. 3319 pills would allow you to outlive the heat death of the universe (a 3.26×10^-13 % chance of surviving for 10^1000 years), but the heat death is a bit far off (and boring). A closer target would be the death of all the stars in the universe, which only requires 44 pills (a 64.2% chance of living 10^14 years).
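Those figures are easy to sanity-check, assuming each pill doubles the remaining lifespan starting from 10 years (the 44 and 3319 pill counts are from the paragraph above):

```python
def survive_prob(pills, p=0.99):
    return p ** pills            # every pill must independently come up safe

def lifespan_years(pills, start=10):
    return start * 2 ** pills    # each surviving pill doubles the remaining span

print(survive_prob(44))      # ~0.64: roughly a 64% chance
print(lifespan_years(44))    # 10 * 2**44, on the order of 10^14 years
print(survive_prob(3319))    # ~3.3e-15, i.e. ~3.3e-13 %
```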
10^14 years should be enough for anyone. Especially when everyone is gone, the sun is dark, and Earth is merely a hunk of dead matter floating freely through space, bound forevermore to the dance of gravity.