Viewers of the television show Family Guy may recall a particularly risqué episode in which the buffoonish protagonist Peter Griffin is approached by a con artist claiming to sell volcano insurance. The con easily works on Peter, despite the apparent absence of volcanoes anywhere near his residence in Quahog, Rhode Island. He promptly trades his family’s savings for the illusory security of the con artist’s dummied-up insurance certificate. The routine is worth a laugh and sets up a comedy of errors that plays out through the episode. You may find yourself chuckling, ‘volcano insurance, who could be so stupid?’
That’s exactly what most European airlines thought until the spring of 2010. That year a massive volcanic eruption in Iceland threw vast quantities of dust and ash into the atmosphere, grounding most European flights for days and costing the airlines hundreds of millions of dollars. Ordinarily, the airlines’ insurance policies would have provided protection for the loss, as routinely happens when flights are canceled due to inclement weather or mechanical issues. However, because those policies didn’t cover volcanic eruptions, the insurance companies (not unreasonably) noted that the airlines weren’t in a position to file a claim. The cloud of ash spreading out from the hilariously unpronounceable Eyjafjallajökull thus became a bigger financial disaster for European airlines than 9/11 had been. Who seems dumber now, the European airlines or Peter Griffin?
Humans have unsurprisingly poor means of gauging the future. In other words, we’re futuredumb. We routinely exaggerate worst-case scenarios, while sometimes real threats go unnoticed until they reach catastrophic proportions. Oftentimes we find ourselves hastily improvising responses to on-rushing circumstances, constantly reacting rather than preparing. The upside of being futuredumb is that we get to think of ourselves as presentsmart, making ourselves most immediately comfortable (even and especially by engaging in activities deleterious to our futureselves). Thus, the volcano insurance joke really gets a lot of laughs. You’d trade how much short-term satisfaction for long-term security? What an idiot! This antagonism of preferences between present and future selves is intrinsic to many elements of our everyday decisions, from eating fast food to procrastinating. And yet, our still-too-human political stewards and captains of industry are expected to exist on timescales alien to our experience of phenomenal existence! How is it that, in spite of it all, we somehow imagine the future? It is a necessary element of continuous selfhood that we endure (remain ipseic through duration), and we are thus confronted with the tricky task of fitting ourselves with chronological prostheses, the Tralfamadorian repetition hurtling towards the unknown.
The conceptual tools that we use to gauge the future are sometimes called heuristics, from the Greek heuriskein, meaning to find. This meaning harkens back to Meno’s paradox of knowledge: how can you find knowledge unless you already know what you’re looking for, and if you know what you’re looking for, then haven’t you already found it? This paradox applies especially well to the possibility of knowledge of the future, since repeated studies have confirmed that we tend to anchor our expectations in our social experiences. This is true even if those experiences are materially or causally unrelated to the proposition in question. That’s why many expert predictions are little better than those of dart-throwing monkeys.
However, there’s little doubt that heuristics are necessary and that we’d be lost without them. The investment banking maxim ‘past performance is no guarantee of future returns’ works better as a playful epistemic prank than as an everyday cognitive dashboard. To live with such thoroughgoing uncertainty would be far more difficult. We would find ourselves cast adrift in a sea of unmoored future-presents, forced to constantly re-assess even our most mundane assumptions. To this homo skeptomai, even tomorrow’s sunrise could only be considered a probability, though she has seen it rise every morning prior. This world of active uncertainty is profoundly paralyzing, as recurrent questions of accuracy and detail plunge us into paroxysms of self-doubt. What some call ‘critical reflection’ reads to others as ‘second-guessing’. Ultimately, we must choose whether to act or not. There is no such thing as refusing a decision; the refusal itself is a decision that implicitly rewards a particular outcome and set of expectations.
To buy volcano insurance, or to not buy volcano insurance? How can we avoid playing the idiot? The decision itself is not isolated, but occurs in a continuum of trade-offs and opportunity costs. Just like in the Family Guy episode, the volcano insurance comes at the cost of our family’s “Rainy Day” money. That jar of bills and change represents a fungible (albeit limited) form of protection against negative uncertainties, since it can be used to fund response measures in a variety of different contexts. And yet, it would be no match for the massive damage of an actual volcano striking our nth-mortgaged Quahog residence. That’s where having a significant pay-out would really come in handy. It’s the lesson of the European airlines all over again. Of course, insurance usually only becomes a good investment in hindsight.
On the other hand, if we decide to put the “Rainy Day” fund into volcano insurance, what other risks have we committed ourselves to addressing? There are surely other catastrophes equally worthy of insuring ourselves against. Suddenly, it’s not just the “Rainy Day” fund getting overdrawn. In our hyper-sensitive awareness of future threats we barely notice that the costs of total security have bankrupted us. How much of the present need be financially sacrificed on the altar of an uncertain future? Not being futuredumb costs a lot, and starts to seem increasingly presentdumb. I recently had a discussion with a friend about the possibility of “insurance insurance”, or the creation of a secondary market to insure against the bankruptcy of insurance companies in the event of multiple simultaneous catastrophes. Of course, why not have “insurance insurance insurance” in case the “insurance insurance” companies fail? There needs to be a limiting principle. Otherwise, how far will the rabbit-hole of our apocalyptic imaginations take us?
You can go broke through preventative measures, but you’d be the exception. It’s far more common to get stung rolling the dice. Our worst-case nightmares turn out to be less likely than the icebergs we don’t see coming. Returning to Meno’s paradox, the fact that we have predicted a human catastrophe with certainty actually suggests that it might be less likely to happen than if we hadn’t. In other words, if we accept that the future is constitutively uncertain, then certainty itself becomes unlikely. The reverse seems to be true as well: the less certain a forecast is, the more likely it is to be true. Yet this only reflects a simple truth of probability. The less certain and vaguer a forecast is, the more outcomes its set contains, making the forecast mathematically more likely to come true than any single outcome alone. These super-accurate (yet less informative) predictions don’t make for good stories and certainly don’t make for good news. Our emotionally contaminated risk calculator seems inclined to prefer compelling narrative accounts of the future, Hollywoodized disaster sagas, tales of individual rather than collective human suffering. Indeed, as the communications scholar Stephen O’Leary has written on the social practices of apocalypse rhetoric, “…the normative function of catastrophic predications appear to be as significant as their accuracy.”
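This point about vague forecasts can be made concrete with a few lines of arithmetic. The snippet below is a toy illustration (the specific probabilities are invented for the example, not taken from any real forecast): a vague forecast like “some disaster will strike this year” is really a union of many specific outcomes, so its probability can only exceed that of any single outcome it contains.

```python
# Hypothetical annual probabilities for three specific disasters
# (illustrative numbers only).
specific = {
    "earthquake": 0.02,
    "flood": 0.05,
    "wildfire": 0.04,
}

# The vague forecast "something bad happens" is the union of all three.
# Assuming independence: P(union) = 1 - product of (1 - p_i).
p_none = 1.0
for p in specific.values():
    p_none *= 1 - p
p_vague = 1 - p_none

print(f"Most likely specific forecast: {max(specific.values()):.3f}")
print(f"Vague 'something bad' forecast: {p_vague:.3f}")
```

The vague forecast comes out at roughly 0.106, more than double the likeliest specific one; it is “more accurate” in exactly the uninformative way the essay describes.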
These catastrophe narratives tend to paper over endemic structural risks that, left to their own devices, gather malicious momentum. These catastrophes appear to “come out of nowhere,” bursting into most people’s awareness through the vehicle of a shocking media story, a nightly news headline, a flurry of social media buzz. Then later, we hear of the dead coalmine-canaries and the messages of the many plaintive Cassandras whose warnings were repeatedly ignored or silenced. Why didn’t anyone listen to them? Nassim Taleb calls these apparently “low-probability” events Black Swans in his book of the same name: statistically long-tail events that represent quantities of hidden systemic risk. The underlying concept comes from probability, where a game is played repeatedly to determine the likelihood of winning, but where the variable costs of winning and losing determine the value of playing the game. But how does this really work?
If you knew you could win a penny from a dice-roll that came up any even number, but any odd number would award you a swift kick in the teeth, then the costs of losing would likely outweigh the rewards of winning for any single game-play. Thus, we need to remember that the ‘risk of losing’ is always a cost of gameplay, even when you win. These dynamics become much less obvious for something like the lottery, where the odds of winning are so astronomically low that almost any cost shouldn’t be justifiable, yet the game retains its popularity among millions. The tricky part is assessing ‘net-risk’ before you’ve played the game. Eliezer Yudkowsky, an expert on artificial intelligence who has written extensively on heuristics, points out examples such as Long-Term Capital Management, where Nobel Prize-winning economists employed an investment strategy that ended up bankrupting a billion-dollar company. This notion of “hidden risk” (along with perhaps just a dose of overconfidence) accounts for why indisputably competent individuals can gamble so precariously. The actual probabilities are illustrative, as Yudkowsky urges us to consider “… a financial instrument that earns $10 with 98% probability, but loses $1000 with 2% probability; it’s a poor net risk, but it looks like a steady winner.”
There are a few lessons available from posing the question of volcano insurance. First, don’t take the word of overconfident experts without a hefty grain of salt. Even Socrates was a mere mortal. You’re smarter if you assume that it won’t rain (but keep an umbrella handy) than if you ever read another weather report. Second, don’t mistake skepticism for an answer. One interesting thing about the reflexivity of risk analysis (the ability of forecasters to compensate for their own fallibility) is that it has a tendency to reinforce pre-existing biases, as we selectively apply our skeptical reasoning to conclusions that we’d prefer not to be true. It gets a little better if we accept that we’re going to tend to believe what we want to believe, and that we tend to look for what we’ve seen before. But the truth of the future remains forever out of reach.
This leads to a troubling question, perhaps undecidable, that I leave open to the reader: if something extremely unlikely actually happens, is it more likely that a) it really was extremely unlikely to begin with, but happened anyway, or b) that our earlier predictions underestimated its actual risk? How do we even step far enough outside of ourselves to answer that question? Second-order probabilities and Bayesian analysis still leave the initial question begged. Now we’re living in a world of insurance insurance and probabilities of probabilities. We are left with only cost-benefit analysis to weigh the costs and benefits of using cost-benefit analysis. There are definitely some self-evident problems of infinite recursion here. You don’t magically step outside of subjectivity just by adding more reflexive layers of anchored calculations or subjective guesswork, regardless of factual basis. Philip Tetlock’s decades-long study of expert predictions suggests that random guessing often outperforms talking heads, and that even our wisest sages tend to be wrong more often than they’re right. So should you buy volcano insurance? More importantly, can you overcome futuredumb without costing yourself the present? Your informed guess is as good as mine.
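As a postscript for the quantitatively inclined reader: a toy Bayesian update sketches how one might weigh options a) and b) above, though every number in it is an invented assumption, which is precisely the begged question. Suppose we had been 90% confident the event was truly rare (1% per period) and allowed a 10% chance that we had underestimated it (20% per period):

```python
# Two hypotheses about an event that just occurred (all figures are
# illustrative assumptions, not from the essay).
prior_rare, p_event_rare = 0.9, 0.01    # a) it really was rare
prior_under, p_event_under = 0.1, 0.20  # b) we underestimated the risk

# Bayes' rule: P(b | event) = P(event | b) * P(b) / P(event)
evidence = prior_rare * p_event_rare + prior_under * p_event_under
posterior_under = prior_under * p_event_under / evidence

print(f"P(we underestimated | event occurred) = {posterior_under:.2f}")
```

Observing the event pushes the probability that we had underestimated from 10% to roughly 69%, but only relative to priors we chose subjectively in the first place. The recursion the paragraph warns about is still there; the arithmetic just makes it visible.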
About Edmund Zagorin
Edmund is a Detroit-based paradox enthusiast and entrepreneur, specializing in video and interactive event production.