Originally published at pelicancrossing.net in the net.wars column
Before there was the internet there were commercial information services and conferencing systems. Since access to things like news wires, technical support, and strangers across the world who shared your weird obsession with balloon animals was scarce, they could charge rather noticeable amounts of money for hourly access. On services like CompuServe, AOL, and Prodigy, discussion areas were owned by independents, who split the revenue their users generated by spending time in their forums with the host service. So far, so reasonably fair.
What kept these forums from drowning in a sea of flames, abuse, bullying, and other bad behavior was not what today’s politicians might think. It was not that everyone was real-world identified because they all had to pay by credit card. It was not that you had a higher class of people because the early adopters were wealthier. And it was not because so many were business people who really needed access to stock quotes, technical support, and balloon animal news. It was because forum owners could trade free access to their forums for help with system administration. Volunteer SysOps moderated discussions, defused fights, issued warnings and bans for bad behavior, cleaned out inappropriate postings, and curated files.
Then came the internet, with its flat monthly subscription fees no matter how much data you used and its absence of technical controls to stop people from putting up their own content, and business models changed. Forum owners saw their revenues plummet. The value to volunteers of their free access did likewise. Forum participation thinned. AOL embraced advertising, dumping the niche sites whose obsessively loyal followings had paid such handsome access fees in favour of mainstream content that aggregated the mass audiences advertisers pay for. Why have balloon animals when you can have the cast of Friends?
TL;DR: a crucial part of building those successful businesses was volunteer humans.
I remember this every time a site shuts down its comment board because of the volume of crap. This week, at Ars Technica, writer and activist Annalee Newitz found a new angle with a piece about Google’s raters. Newitz finds that these folks are paid an hourly rate somewhat above minimum wage, though they lack health insurance and are dependent on being logged in when tasks arrive.
The immediate reason for her story was that while Google is talking about deploying thousands of raters to help fix YouTube’s problem with advertisers and extremist videos, this group’s hours are being cut. The exact goals are murky, but the main driver is apparently to avoid loading their actual employer, to which Google subcontracts this part of its operation, with a benefits burden that company can’t afford. Much of the story is a messy tale of America’s broken healthcare system. However, in researching these workers’ lives, Newitz uncovers Sarah Roberts, a researcher at UCLA who has spent the last five years traveling the world studying raters’ work. What has she found? “Actually their AIs are people in the Philippines.”
So again: under-recognized humans are the technology industry’s equivalent of TV’s funny friend. In 2003, on a visit to Microsoft Research, I was struck by the fact that although the company was promoting its smart home, right outside it the campus was run entirely by human receptionists, who controlled access, dispensed information, and filled their open hours with small but useful administrative projects.
This pattern is everywhere. Uber’s self-driving cars need human monitors to intervene approximately once every 0.8 miles. Google’s Waymo cars perform better – but even so, they require human aid once every 5,000 miles, a rate that is arguably more dangerous because a monitor who rarely has to intervene is far less likely to be alert when the moment comes. Plus the raters: at Google, obviously, but also at Facebook and myriad other sites.
The goal for these companies is rather obviously that the human assistance should act as training wheels for automation, which – returning to Newitz’s piece – is a lot easier to do if they’re not covered by employment laws that make them hard to lay off. There is an old folk song about this: Keep That Wheel a-Turning.
In the pre-computer world, your seriousness about a particular effort could be judged by the number of humans you deployed to work on it. In the present game, the perfect system (at least for technology companies and their financiers) would require no human input at all, preferably while generating large freighter-loads of cash. WhatsApp got close: when Facebook bought it, it had just 55 employees serving 420 million users worldwide.
Human moderation is more effective – and likely to remain so for the foreseeable future – but it cannot scale to manage 1.2 billion Facebook users. Automation is imperfect, but it scales, and it is immune to post-rating trauma. Which is why, as Alec Muffett points out, the Home Affairs Select Committee’s May 1 report’s outraged complaint that Google, Facebook, and Twitter deploy insufficient numbers of staff to counteract online hate speech is a sign that the committee has not fully grasped the situation. “How many ‘staff’ will equate to a four-thousand-node cluster of computers running machine-learning / artificial intelligence software?” he asks.
It’s a good question, and one we’re going to have to answer in the interests of having the necessary conversations about social media and responsibility. As Roberts says, the discussion is “incomplete” without an understanding of the part humans play in these systems.
Illustrations: HAL, from Stanley Kubrick’s 2001: A Space Odyssey; Annalee Newitz; Sarah Roberts (UCLA).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.