Originally published on pelicancrossing.net
There’s plenty to fret about in the green paper released this week outlining the government’s Internet Safety Strategy (PDF) under the Digital Economy Act (2017). The technical working group is predominantly made up of child protection folks, with just one technical expert and no representatives of civil society or consumer groups. It lacks definitions: what qualifies as “social media”? And issues discussed here before persist, such as age verification and the mechanisms to implement it. Plus there are picky details, like requiring parental consent for the use of information services by children under 13, which apparently fails to recognize how often parents help their kids lie about their ages. However.
The attention-getting item we hadn’t noticed before is the proposal of an “industry-wide levy which could in the future be underpinned with legislation” in order to “combat online harms”. This levy is not, the paper says, “a new tax on social media” but instead “a way of improving online safety that helps businesses grow in a sustainable way while serving the wider public good”.
The manifesto commitment on which this proposal is based compares this levy to those in the gambling and alcohol industries. The Gambling Act 2005 provides for legislation to support such a levy, though to date the industry’s contributions, most of which go to GambleAware to help problem gamblers, are still voluntary. Similarly, the alcohol industry funds the Drinkaware Trust.
The problem is that these industries aren’t comparable in business model terms. Alcohol producers and retailers make and sell a physical product. The gambling industry’s licensed retailers also sell a product, whether it’s physical (lottery tickets or slot machine rolls) or virtual (online poker). Either way, people pay up front and the businesses pay their costs out of revenues. When the government raises taxes or adds a levy or new restriction that has to be implemented, the costs are passed on directly to consumers.
No such business model applies in social media. Granted, the profits accruing to Facebook and Google (that is, Alphabet) look enormous to us, especially given the comparatively small amounts of tax they pay to the UK – 5% of UK profits for Facebook and a controversial but unclear percentage for Alphabet. But no public company adds costs without planning how to recoup them, so then the question is: how do companies that offer consumers a pay-with-data service do that, given that they can’t raise prices?
The first alternative is to reduce costs. The problem is how. Reducing staff won’t help with the kinds of problems we’re complaining about, such as fake news and bad behavior, which require humans to solve. Machine learning and AI are not likely to improve enough to provide a substitute in the near term, though no doubt the companies hope they will in the longer term.
The second is to increase revenues, which would mean either raising prices to advertisers or finding new ways to exploit our data. The need to police user behavior doesn’t seem like a hot selling point to convince advertisers that it’s worth paying more. That leaves the likelihood that applying a levy will create a perverse incentive to gather and crunch yet more user data. That does not represent a win; nor does it represent “taking back control” in any sense.
It’s even more unclear who would be paying the levy. The green paper says the intention is to make it “proportionate” and ensure that it “does not stifle growth or innovation, particularly for smaller companies and start-ups”. It’s not clear, however, that the government understands just how vast and varied “social media” are. The term includes everything from the services people feel they have little choice about using (primarily Facebook, but also Google to some extent) to the web boards on news and niche sites, to the comments pages on personal blogs, to long-forgotten precursors of the web like Usenet and IRC. Designing a levy to take account of all business models and none while not causing collateral damage is complex.
Overall, there’s sense in the principle that industries should pay for the wider social damage they cause to others. It’s a long-standing approach for polluters, for example, and some have suggested there’s a useful comparison to make between privacy and the environment. The Equifax breach will be polluting the privacy waters for years to come as the leaked data feeds into more sophisticated phishing attacks, identity fraud, and other widespread security problems. Treating Equifax the way we treat polluters makes sense.
It’s less clear how to apply that principle to sites that vary from self-expression to publisher to broadcaster to giant data miner. Since the dawn of the internet, any time someone’s created a space for free expression someone else has come along and colonized a corner of it where people could vent and be mean and unacceptable; 4chan has many ancestors. In 1994, Wired captured an early example: The War Between alt.tasteless and rec.pets.cats. Those Usenet newsgroups created revenue for no one, while Facebook and Google have enough money to be the envy of major governments.
Nonetheless, that doesn’t make them fair targets for every social problem the government would like to dump off onto someone else. What the green paper needs most is a clear threat model, because it’s only after you have one that you can determine the right tools for solving it.
Illustrations: Social network diagram.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.