“Models are best for understanding, but they are inherently wrong,” Helen Dacre said, evoking robotics engineer Bill Smart on sensors. Dacre was presenting a tool that combines weather forecasts, air quality measurements, and other data to help airlines and other stakeholders quickly assess the risk of flying after a volcanic eruption. In April 2010, when Iceland’s Eyjafjallajökull blew its top, European airspace shut down for six days at an estimated overall cost of £1.1 billion. Since then, engine manufacturers have studied the effect of atmospheric volcanic ash on aircraft engines, and are finding that a brief excursion through peak levels of concentration is less damaging than prolonged exposure at lower levels. So, do you fly?
This was one of the projects presented at this week’s conference of the two-year-old network Challenging Radical Uncertainty in Science, Society and the Environment (CRUISSE). To understand “radical uncertainty”, start with Frank Knight, who in 1921 differentiated between “risk”, where the outcomes are unknown but the probabilities are known, and uncertainty, where even the probabilities are unknown. Timo Ehrig summed this up as “I know what I don’t know” versus “I don’t know what I don’t know”, evoking Donald Rumsfeld’s “unknown unknowns”. In decisions under radical uncertainty, existing knowledge is not relevant because the problems are new: the discovery of metal fatigue in jet airliners; the 2008 financial crisis; social media; climate change. The prior art, if any, is of questionable relevance. And you’re playing with live ammunition – real people’s lives. By the million, maybe.
How should you change the planning system to increase the stock of affordable housing? How do you prepare for unforeseen cybersecurity threats? What should we do to alleviate the impact of climate change? These are some of the questions that interested CRUISSE founders Leonard Smith and David Tuckett. Such decisions are high-impact, high-visibility, with complex interactions whose consequences are hard to foresee.
It’s the process of making them that most interests CRUISSE. Smith likes to divide uncertainty problems into weather and climate. With “weather” problems, you make many similar decisions based on changing input; with “climate” problems your decisions are either a one-off or the next one is massively different. Either way, with climate problems you can’t learn from your mistakes: radical uncertainty. You can’t reuse the decisions; but you *could* reuse the process by which you made the decision. They are trying to understand – and improve – those processes.
This is where models come in. This field has been somewhat overrun by a specific type of thinking they call OCF, for “optimum choice framework”. The idea there is that you build a model, stick in some variables, and tweak them to find the sweet spot. For risks, where the probabilities are known, that can provide useful results – think cost-benefit analysis. In radical uncertainty…see above. But decision makers are tempted to build a model anyway. Smith said, “You pretend the simulation reflects reality in some way, and you walk away from decision making as if you have solved the problem.” In his hand-drawn graphic, this is falling off the “cliff of subjectivity” into the “sea of self-delusion”.
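To make the OCF idea concrete, here is a toy sketch of my own (not anything presented at the conference; the function names, costs, and probabilities are all invented): a model with a known damage probability, tweaked to find the “sweet spot”. Under risk this is just cost-benefit analysis; under radical uncertainty, the probability fed into the model is precisely the thing nobody knows.

```python
# Toy "optimum choice framework": pick the action minimizing expected cost,
# given a damage probability that is ASSUMED to be known. All numbers invented.

def expected_cost(fly, p_damage, cost_damage=1000, cost_ground=50):
    """Expected cost of a fly/ground decision for a known damage probability."""
    return p_damage * cost_damage if fly else cost_ground

def optimum_choice(p_damage):
    """Return True (fly) or False (ground) -- only meaningful if p_damage is real."""
    return min([True, False], key=lambda fly: expected_cost(fly, p_damage))

print(optimum_choice(0.01))  # True: expected damage cost 10 < grounding cost 50
print(optimum_choice(0.10))  # False: expected damage cost 100 > grounding cost 50
```

The model is internally consistent either way; the self-delusion Smith describes comes from supplying a made-up `p_damage` and then treating the output as if the probability had been real.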
Uncertainty can come from anywhere. Kris de Meyer is studying what happens if the UK’s entire national electrical grid crashes. Fun fact: it would take seven days to come back up. *That* is not uncertain. Nor are the consequences: nothing functioning, dark streets, no heat, no water after a few hours for anyone dependent on pumping. Soon, no phones unless you still have copper wire. You’ll need a battery or solar-powered radio to hear the national emergency broadcast.
The uncertainty is this: how would 65 million modern people react in an unprecedented situation where all the essentials of life are disrupted? And, the key question for the policy makers funding the project, what should government say? *Don’t* fill your bathtub with water so no one else has any? *Don’t* go to the hospital, which has its own generators, to charge your phone?
“It’s a difficult question because of the intention-behavior gap,” de Meyer said. De Meyer is studying this via “playable theater”, an effort that starts with a story premise that groups can discuss – in this case, stories of people who lived through the blackout. He is conducting trials for this and other similar projects around the country.
In another project, Catherine Tilley is investigating the claim that machines will take all our jobs. Tilley finds two dominant narratives. In one, jobs will change, not disappear, and automation will bring more of them, along with enhanced productivity and new wealth. In the other, we will be retired…or unemployed. The numbers in these predictions are very large, but conflicting, so they can’t all be right. What do we plan for education and industrial policy? What investments do we make? Should we prepare for mass unemployment, and if so, how?
Tilley identified two common assumptions: tasks that can be automated will be; automation will be used to replace human labor. But interviews with ten senior managers who had made decisions about automation found otherwise. TL;DR: sectoral, national, and local contexts matter, and the global estimates are highly uncertain. Everyone agrees education is a partial solution – “but for others, not for themselves”.
Here’s the thing: machines are models. They live in model land. Our future depends on escaping.
Illustrations: David Tuckett and Lenny Smith.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.