Originally published on pelicancrossing.net
The late Simon Hoggart once wrote that the reason to take the trouble to debunk apparently harmless paranormal beliefs was this: they were “background noise, interfering with the truth”. There were, at the time, many people who said that *of course* they did not take seriously the astrology column they read every morning. It was “just for fun”.
And that was probably true, or mostly true. I do think, in that humorless Skeptic way, that these things can establish an outpost of uncertainty in your brain. But so many trends and habits have led to the current bust-up over fake news that it’s hard to pick just one. The most wide-ranging discussion of this is over at science fiction writer Charlie Stross’s blog.
Stross’s main argument concerns Twitter and Facebook: what succeeds on platforms that have nothing to sell but eyeballs to advertisers is emotional engagement. Shock, fear, anger, horror, excitement: these sell, and they have nothing to do with reason or facts. Stross argues that these factors were less significant in traditional media because there was a limited supply of advertising space and higher barriers to entry. I’m not so sure; it has often seemed to me that the behavior on social media is just the democratization of tactics pioneered by Rupert Murdoch and the Daily Mail.
The fact that a small group of teens in a Macedonian town can make money pushing out stories they know are fake, stories that may influence how millions of Americans think as they go to the polls on Election Day…that’s new.
That those teens and other unscrupulous people are funded by large American businesses via advertising networks…that’s also new.
The spotlight has turned up some truly interesting things. In the Observer, Carole Cadwalladr used Google’s autocomplete search suggestions to unearth what she describes as a vast, three-dimensional, factless parallel universe that is gradually colonising the web. In a follow-up, she says Google refused to discuss matters but has quietly made some adjustments. Gizmodo reported that Google’s top hit in response to the question “Did the Holocaust really happen” is a link to the white supremacist neo-Nazi group Stormfront; Wikipedia’s explanation of Holocaust denialism is hit number four. Google told Gizmodo that while it’s “saddened” that hate groups still exist, it won’t remove the link.
It’s always a mistake to attribute a large phenomenon to a single cause. There are many motives for creating individual fake news stories: money, as with the Macedonian teens; interference with the US election, as US intelligence agencies say was intended; hijinks; political activism. It is exactly the same pattern we’ve seen with computer hacking; there are tools to commit news hacking at all levels from script kiddies to state-sponsored, high-level experts, and motives to match.
The strategy behind the political side of this was clearly outlined in the 2011 documentary Astroturf Wars. Touring American Tea Party country, Australian filmmaker Taki Oldham found right-wing experts teaching conference attendees how to game online ratings and reputation systems so that the material they liked rose to the top and the material they didn’t (such as any documentary made by Michael Moore) sank into invisibility. People didn’t even have to read or view the material, the trainer said, showing them how to use keywords and other metadata to “give our ideals a fighting chance”.
Now, five years later, everyone is sorry. Or at least, sorry enough to be hatching various schemes: flagging, labelling, rating, fact-checking, and so on. Eventually these first steps will probably be gamed, too, and we’ll have to rethink, but at least they are a start.
What really needs to change, however, is the thinking of the people who own and deploy these systems. As cyberspace continues to bleed into the physical world, thinking through consequences before building and deploying becomes increasingly important, and it’s something Silicon Valley in particular is notorious for avoiding. “Do first, ask permission later”, they say. So this week Uber sent some unlicensed self-driving cars out onto the streets of San Francisco. Accidents ensued. Uber bizarrely says the presence of humans in the cars means the company doesn’t need permits, and attributes the accidents to “driver error”. Well, was the car self-driving or not? Was this a real launch or a marketing stunt gone wrong? Does the company have no lawyers who worry about liability?
You may love Uber or avoid it based on stories like that one, or on this week’s other news, in which a former forensic investigator for the company accused its staff of spying on customers. Either way, the we-are-above-the-law attitude is clear, and Uber is only the latest example.
Fake news is distributed by computers working exactly the way they are supposed to when large automated systems replace a previously labor-intensive business. The problem is not that today’s ad agencies are robots that don’t care about us. It’s not the robots who fail to care whether a story persuades someone that Hillary Clinton is doing bad things via a pizzeria convincingly enough that they load up the assault rifle and go “self-investigate”. The problem is that these systems are humans all the way down.
The problem is the humans who programmed the robots. First, we need *them* to care.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.