Originally published on pelicancrossing.net on July 14th, 2017
In 1957, the BBC’s flagship current affairs program, Panorama, broadcast a story about that year’s extraordinarily bountiful spaghetti harvest, attributed to the “virtual disappearance of the spaghetti weevil” (it says here in Wikipedia). It was, of course, an April 1 hoax, and apparently up there with the 1938 War of the Worlds radio broadcast if it’s still being pulled out in 2017 as a pertinent precursor to a knotty modern problem, as Baroness Patience Wheatcroft did yesterday at a Westminster Forum discussion of fake news (PDF). In any case, it appears that national unfamiliarity with that foreign muck called pasta meant that many people believed it and called in asking how to grow their own spaghetti trees.
Parts of the discussion proceeded along familiar lines. Some things pretty much everyone agreed on. Such as: fake news is not new. Skeptics have been fighting this stuff for years. There has long been much more money in publishing stories promoting miracles than there ever will be in debunking them. Even if belief in spaghetti trees has died in the face of greater familiarity with the product, hoaxes are perennially hard to kill. In 1862 Mark Twain found that out, and in the 1980s so did science fiction critic David Langford.
Everyone also converged on a consistent meaning of “fake news”, even though really it’s a spectrum whose boundaries are as smudged as Wimbledon’s baselines this week. People publish stories that aren’t true for all kinds of reasons – satire, parody, public education, journalistic incompetence – but the ones everyone is exercised about are stories that are intentionally false and are distributed for political or financial gain. The discussion left a slight gap there, in that doing so just for the lulz doesn’t have a fully political purpose and yet is a very likely scenario. But close enough.
Skeptics’ experience shows that every strategy you adopt for identifying genuine information will be emulated by others seeking to promote its opposite: you have scientists, they have scientists. We know this from the history of Big Tobacco and Big Oil. This week, Google was accused of funding research favorable to its interests in copyright, antitrust law, privacy, and information security, a report Google calls misleading.
Similar problems apply to the item everyone thought had to form part of the solution: teach digital literacy. Many suggested it should be added to the primary school curriculum, and sure, let’s go for it, but human beings teach these things. Given that political polarization has reached the point where Fox News viewers and New York Times readers cannot agree on even the most basic of facts about, say, climate change or American health care, what principles do you give kids by which to determine whom to believe? What does a creationist teach kids about judging science stories? Wikipedia ought to be the teacher’s friend because its talk pages lay out in detail how every page was built and curated; instead, for years many have told kids to avoid “unreliable” Wikipedia in favor of using a search engine to find better information. The result: they trust Google without understanding how it works. A more subtle problem of provenance was raised by Matt Tee, the CEO of the Independent Press Standards Organisation, who said that on social media platforms, particularly Facebook, all news stories look alike, no matter where they’re from. More startling was the claim by Adblock Plus’s Laura Sophie Dornheim that ad blockers can help by interfering with the business model of clickbait farms. To an audience seeking solutions but to whom the loss of advertising revenue was an important part of the problem, she was a disturbing bit of precipitate.
Inevitably there was discussion of regulation. Leaving aside whether these companies are platforms, publishers, or some kind of hybrid, the significant gap in this and most other discussions is the how. The image in our minds matters; for the foreseeable future this won’t be solved by computers. Instead, as Annalee Newitz recently reported in Ars Technica, the world’s social media content raters are humans, many of them in countries like India, where Adrian Chen and Ciaran Cassidy followed a two-week rating training course, and the Philippines. Observes an unidentified higher-up, “You definitely need man behind the machines.”
This is what efforts to control fake news – a vastly more complex problem – will also look like. GAFAT et al. may be forced to hire expensive journalists and scholars to figure out what the rules for identifying fake news should be, but ultimately these rules will be put into practice by an army of subcontractors far removed from the “us” who are being protected from it. There are bound to be unintended consequences.
Fake news is yet another way that our traditional democratic values are under threat. Even small terrorist attacks have provided justification for putting into place a vast surveillance framework that’s chipped away at our values of privacy and the right to travel freely. Everyone yesterday was conscious of the threat to freedom of expression that attempts to disappear fake news may represent. But, like computer security, fake news is an arms race: those intent on financial gain and political disruption will attempt to turn every new system to their advantage. Computer scientists cannot solve today’s security problems without consulting many other disciplines; the same will prove true of the journalists, media professionals, and scholars who are fretting about our very human tendency to go “Ooh, shiny!” at entertaining lies while putting off reading sober truths.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.