Scott Alexander of SlateStarCodex / AstralCodexTen recently wrote Pascalian Medicine, in which he looks at various substances purported to improve covid outcomes but which have relatively little evidence in their favor. He likens administering all of them to patients to a Pascal's-wager-type argument: if there is a small probability that a treatment helps with covid, and it is also very unlikely to be harmful, should we just give it to the patient even though the evidence is weak and uncertain, since doing so would clearly have a positive expected outcome?
The naive answer could simply be to calculate an expected value for each treatment (note: I use the term expected value often here, but in some cases the terms hazard ratio, relative risk, or odds ratio would be more appropriate) and administer it if the result is positive. But applying this methodology across the entire set of potential treatments could have unintended consequences: we could end up prescribing 10 or 100+ pills for a condition, and apart from something just feeling off about this, it would magnify potential drug interactions, some treatments could directly oppose others, the financial cost could become prohibitive, and it could decrease patient confidence, among many other undesirable second-order effects.
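The naive rule above can be sketched in a few lines. Every probability and effect size here is a made-up placeholder, not an estimate for any real treatment:

```python
# A minimal sketch of the naive per-treatment rule: accept any treatment
# whose expected value is positive. All numbers are hypothetical.

def expected_value(p_benefit, benefit_years, p_harm, harm_years):
    """Expected lifespan change (in years) from a single treatment."""
    return p_benefit * benefit_years - p_harm * harm_years

# Hypothetical treatments: (name, P(benefit), benefit, P(harm), harm)
treatments = [
    ("A", 0.20, 1.0, 0.01, 0.5),   # EV = 0.195
    ("B", 0.05, 2.0, 0.02, 1.0),   # EV = 0.080
    ("C", 0.01, 0.5, 0.05, 0.5),   # EV = -0.020
]

# The naive rule accepts everything with EV > 0, no matter how marginal.
accepted = [name for name, pb, b, ph, h in treatments
            if expected_value(pb, b, ph, h) > 0]
print(accepted)  # → ['A', 'B']
```

Note that this rule considers each treatment in isolation, which is exactly where the second-order problems above (interactions, pill count, cost) come from.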
There are many counter-arguments presented to the above concept which become less salient when the goal is changed from ‘find drug treatments to prescribe to all covid patients’ to ‘find personal health interventions that increase your own lifespan/longevity’.
I am fortunate enough that I am able to evaluate potential longevity interventions myself, pay for them myself, administer them myself, and review their potential effects on me myself. I might not do a perfect job of this – research is difficult, time-consuming, and lacking in rigor and quantity, and finding appropriate longevity biomarkers to quantitatively assess the effects of interventions is also difficult. But uncertainty is a given here, and that is why we incorporate it into our frameworks when deciding if something is worth doing or not by calculating an expected value. Furthermore, any harm that I may accidentally incur will only be done to myself, reducing the ethical qualms of this framework to near-zero (I would strongly oppose arguments that I should not have the right to take drugs which I think may significantly improve my own health, although some may disagree here).
My modus operandi with respect to longevity may have many uncertainties in its output, but it still operates with a very strong (in my opinion) positive expected value: if a substance significantly and consistently increases the lifespan of organisms similar to humans (ideally in humans), and is also very safe in humans, then it is something that I want to take.
This is how I operate personally with longevity, and it does result in me taking quite a few things (currently I'm at around 15). I do still try to minimize what I take as a meta-principle (for example, setting a minimum threshold of expected value that a substance must provide to warrant inclusion, rather than simply accepting any positive expected value) for a few reasons: firstly, to reduce potential drug interactions (which we do attempt to assess on a per-substance basis, rather than account for as an unknown, but unknowns are unfortunately a very large component of messing with biology regardless). Secondly, to keep my costs relatively sane, although I am not too worried about this, as there are few ways to spend money more effectively than on trying to improve your health. Thirdly, to reduce the occurrence of interventions that share the same or opposing mechanisms of action (taking two things with the same mechanism of action may be okay, but dose-response curves are sometimes less favorable, and taking >~2x of something can yield diminished or even negative returns). Lastly, to minimize potential secondary side-effects that could be cumulative across large classes of substances (for example, effects on the liver).
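The first reason above has a simple combinatorial flavor worth making explicit: the number of potential pairwise interactions grows quadratically with the number of substances, so a minimum-EV threshold buys more safety than it might appear to. A sketch, with hypothetical EV numbers:

```python
# Why cap the stack size: each new substance must be checked against
# everything already taken, so pairwise interactions grow as n*(n-1)/2.
# The EV numbers below are hypothetical placeholders.

def pairwise_interactions(n):
    """Number of distinct substance pairs in a stack of n substances."""
    return n * (n - 1) // 2

def keep(candidates, threshold):
    """Keep only substances whose (hypothetical) EV clears a minimum bar."""
    return [name for name, ev in candidates if ev >= threshold]

candidates = [("a", 0.30), ("b", 0.12), ("c", 0.04), ("d", 0.01)]

stack = keep(candidates, 0.05)          # threshold trims the marginal ones
print(stack, pairwise_interactions(len(stack)))
print(pairwise_interactions(15))        # ~15 substances -> 105 pairs to reason about
```

A stack of around 15 substances already implies over a hundred substance pairs, which is part of why "any positive EV" is a worse rule than "positive EV above a bar".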
I don’t intend to promote any specific substances or interventions here as I don’t give medical advice, nor do I want anything specific to be the focus of this post, but I do want to remind us that just as we can calculate expected values in a utilitarian fashion and get effective altruism as a result, we can do the same for longevity interventions and get a very strong chance at notably increasing our lifespan/healthspan as a result. I do have a list of some of what I take here, but it is definitely not intended to promote anything specific to others.
Why not? Potential counter-arguments
Algernon’s Law is sometimes brought up, suggesting that evolution has already put a lot of effort into optimizing our body, and thus we are unlikely to find improvements easily. But, as Gwern notes in the above link, there are at least three potential ways around this reasoning: interventions may be complex (and/or too far away in the evolutionary landscape) and could not easily have been found; they may be minor or only work in some individuals; or they may involve a large trade-off and harm reproductive fitness.
Although some areas of future longevity treatments may fall under exception one and be complex enough that evolution could not have found them, I would suggest that the majority of today’s potential treatments fall under exception three: evolution optimizes for reproductive fitness, not for longevity, and for this reason there are many interventions which will improve our longevity that it has not given to us already (this is part of why I am more optimistic about longevity interventions than I am about intelligence interventions/nootropics).
For an extreme example of this, it has been noted that castrated males often live longer, and that this is obviously something evolution would not be very interested in exploring. This has been found for median lifespan in male mice (maybe in females too?), there is purported historical data on Korean eunuchs suggesting that they may have lived a full 14-19 years longer (there are definitely potential confounding variables and/or bad data here, but we don’t have RCTs on this in humans for obvious reasons), and there is a highly relevant recent study in sheep, Castration delays epigenetic aging and feminizes DNA methylation at androgen-regulated loci, in which DNA-methylation-based epigenetic aging clocks were applied to castrated sheep. There are other traits that seem to improve longevity as well, for example decreased height. It seems quite plausible that there are many trade-offs that optimize for strong reproductive fitness early in an organism’s lifespan but end up costing it dearly in terms of longevity. These trade-offs may involve testosterone, estrogen, growth hormone, IGF-1, caloric restriction, mTOR activation, and many other areas.
Large error in estimating unknown risks
One other counter-argument here is often along the lines of “you are messing with things you don’t understand, and you could be hurting yourself but be unaware of this; the damage may also be difficult to notice, or perhaps only become noticeable at a much later time.”
It is true that our understanding of biology is lacking, and therefore that we are operating in highly uncertain environments. I would be open to evidence suggesting why we may be systematically underestimating the unknown risks of longevity interventions, but given how strong the potential upside is, these would have to be some pretty terrible mistakes. It is often noted that curing cancer may only extend average human lifespan by a few years, whereas a longevity improvement of 5% for everyone would provide much more value (and is also much easier to find, in my opinion). One could even argue that if I were doing something that notably increased my risk of e.g. cancer, but the expected lifespan increase of the intervention were as much as 1-5%, it could still be a huge net positive for my health! I don’t take approaches that are this extreme regardless, and I try to keep the risk side of my risk/reward ratio low independently of the level of potential reward, in an attempt to account for this uncertainty. I am also not aware of many interventions with very high numbers in both the numerator and denominator here, although I am pretty certain they exist; for the time being, I don’t take anything that I think has notably detrimental side-effects.
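The arithmetic behind that comparison is quick to check, assuming a hypothetical 80-year baseline lifespan (the baseline is my assumption, not a figure from the original):

```python
# Rough arithmetic for the comparison above: a blanket 5% lifespan
# improvement vs. curing a single disease. Baseline is hypothetical.

baseline = 80.0                    # assumed average lifespan, in years
uniform_gain = baseline * 0.05     # a 5% improvement for everyone
print(uniform_gain)                # 4.0 years per person, for every person
```

A uniform 4-year gain for everyone is in the same ballpark as the commonly cited few-years estimate for curing cancer, but it applies to the entire population rather than only to those who would have developed the disease.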
Is it fair to call this approach Pascalian?
The original nature of Pascal’s wager is that of extreme probabilities resulting in positive expected values, but the numbers we are operating with are nowhere near as extreme as they could be. It is probably not a good idea, for many reasons, to take 10,000 supplements, each of which has a 0.1% chance of extending your lifespan by a year (similarly, if 10,000 people who claimed to be God all offered me immortality for a small fee, I would hope to decline all of their offers unless sufficient evidence was provided by one).
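One way to see why the 10,000-supplement version fails, with made-up numbers: the naive expected-value sum looks impressive, but even a tiny independent chance of serious harm per substance compounds to near-certainty across the whole set:

```python
# Why 10,000 tiny bets differ from 5-10 decent ones: the naive EV sum
# ignores how small independent harm probabilities compound.
# All numbers are hypothetical.

n = 10_000
p_benefit, benefit_years = 0.001, 1.0
naive_ev = n * p_benefit * benefit_years
print(naive_ev)  # ~10 expected years -- looks great on paper

# But suppose each substance independently carries a 0.05% chance of
# serious harm. The chance that at least one goes wrong:
p_harm = 0.0005
p_any_harm = 1 - (1 - p_harm) ** n
print(round(p_any_harm, 3))  # ~0.993: something almost certainly goes wrong
```

The independence assumption here is itself generous; with real biology, interactions would likely make the compounding worse, not better.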
As I’m not arguing in favor of taking hundreds or thousands of supplements in the hopes of striking gold with a few of them, ‘Pascalian Longevity’ would be a poor label for my strategy. Regardless, taking even 5-10 longevity interventions with strong upside potential is significantly more than almost everyone else is doing, so I still stand by my claim that there are many free lunches (free banquets, if you ask me) in this area, and I am very optimistic about the types of longevity interventions we’ll find in the coming decades.
Open to any corrections/comments on Twitter or any medium on my about page