
Unreal humans

By Wendy Grossman (originally posted on net.wars on August 21, 2015)

Over there, the BBC’s Bill Thompson has a glowing review of the TV series Humans. The obvious irony in the title is serendipitously expanded by the simultaneous airing in the US of Mr Robot. Humans is about robots and Mr Robot is about humans. Sort of.

For the uninitiated, Humans is a British remake of the Swedish series Äkta Människor. In an alternative present, humanoid androids – “Synths” – have spread throughout society. Synths have brilliant green eyes and slightly stiff movements. There has been much progress since the earliest models, which are “recycled” for parts, their brains wiped.

The Synth business is well-developed, with stores, service departments, and second-hand outlets. The national social care service deploys Synths as home carers – part nurse, part companion, part jailer. As the story opens, Joe Hawkins (Tom Goodman-Hill), a harassed father with three kids and a traveling wife, buys a black-market reprogrammed Synth named Anita (Gemma Chan) at a knockdown price. The perfectly shaped Anita’s perfect impersonation of a perfect housekeeper and nanny exposes the family’s inner troubles, which of course rebound on…well, it or her? That is the question.

No such series is complete without exceptions, and this one is no…exception. Part of the action follows the travails of a small band of Synths that are secretly endowed with human-level consciousness. It is these Synths that Thompson defended, arguing that programming Isaac Asimov’s Three Laws of Robotics into positronic brains is morally indefensible, equivalent to shackling a slave or caging a gorilla. Substitute something real for “positronic”. Either way, I don’t think it’s a fair comparison.

However, as I went on to argue in a comment, possibly wrongly, there’s a better reason not to implement Asimov’s First Law: it seems technically impossible. Granted, I’m not a mathematician who can produce an elegant proof, but intuitively…

Alan Turing and his successors long ago showed that it is impossible to create an algorithm that can tell whether a given computer program will halt – that is, complete. In writing my comment, I had a dim idea that somehow that led to making Asimov’s Laws non-computable. This much I know: there is a class of computer problems that can provably be solved in a reasonable amount of time (“polynomial time”, meaning that the amount of time needed to solve the problem doesn’t expand outrageously as you increase the size of the input to the algorithm). That class is called P. There is a second class of problems for which there is no known way to find an answer in that reasonable amount of time but for which a possible solution can be *verified* quickly; that class is NP. The big question: does P equal NP? That is, are the two sets identical? In 2013, Elementary (Season 2, episode 2, “Solve for X”) incorporated this open problem into a murder mystery: brave stuff for prime-time TV.
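For readers who’d like to see the P-versus-NP distinction concretely, here’s a small illustrative sketch in Python (my example, not anything from the episode): subset-sum is a classic problem in NP. As far as anyone knows, finding a subset of numbers that adds up to a target can require trying exponentially many combinations, but checking a proposed answer takes only a quick sum.

```python
# Illustrative sketch of the NP idea: subset-sum.
# Finding a solution may mean trying up to 2**len(numbers) subsets;
# verifying a proposed solution is just a membership check and a sum.

from itertools import combinations

def find_subset(numbers, target):
    """Brute-force search: potentially exponential in len(numbers)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def verify_subset(numbers, subset, target):
    """Verification is fast: confirm each element comes from the list, then sum."""
    pool = list(numbers)
    try:
        for x in subset:
            pool.remove(x)
    except ValueError:
        return False  # subset used an element the list doesn't contain
    return sum(subset) == target

numbers = [3, 34, 4, 12, 5, 2]
print(find_subset(numbers, 9))            # (4, 5) -- found only after a search
print(verify_subset(numbers, (4, 5), 9))  # True -- checked in an instant
```

The gap between those two functions – slow to find, fast to check – is the whole P-versus-NP question in miniature.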

The problems known as “NP-complete” are the hardest in NP: no one has found a way to solve any of them in a reasonable amount of time, and a fast solution to one would crack them all. In 1999, Rebecca Mercuri showed that securing electronic voting is one of them. But securing a voting machine is vastly simpler than incorporating all the variables flying at an Asimov-constrained Synth trying to decide what action is appropriate to solve the danger facing this human at this millisecond. How many decision trees must a Synth walk down? How far ahead should it look? How does it decide whose interests take priority? How do you keep it from getting so tangled up in variables that it hangs and, through inaction, allows a human to come to harm?
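To get a feel for the scale of that lookahead, here’s a hypothetical back-of-envelope sketch (my numbers, not anything from the show): suppose a Synth can choose among ten actions at each decision step and must weigh every sequence of choices up to a given depth. The number of futures to evaluate grows exponentially with the depth.

```python
# Hypothetical sketch: exponential blowup of a First Law lookahead.
# Assumes `actions` possible choices per step and `depth` steps of foresight;
# both figures are invented for illustration.

def futures_to_evaluate(actions: int, depth: int) -> int:
    return actions ** depth

for depth in (1, 5, 10, 20):
    print(depth, futures_to_evaluate(10, depth))
# 1  10
# 5  100000
# 10 10000000000
# 20 100000000000000000000
```

At twenty steps of foresight that is 10^20 futures – no millisecond-scale deliberation is going to exhaust them.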

I don’t think it can be done. But even if it could…granted that it’s a sign of my inner biological supremacist, or what an io9 discussion of the unworkability of the Three Laws calls “substrate chauvinism”, comparing Asimov’s Laws to shackles and cages applied to the breakable bodies housing biological intelligences is getting carried away by the story. Great news for writers Sam Vincent and Jonathan Brackley, and the very human actors who evoked such emotion while embodying machines.

The real issue is that humans can anthropomorphize anything. We become attached to all sorts of inanimate objects that are incapable of returning our affection but that we surround with memories. The show’s George (William Hurt) is profoundly attached to his outmoded Synth, Odi, precisely because Odi stores all the memories his aging mind has lost. Anthropomorphism is a recurring theme at We Robot (see here for 2015, 2013, and 2012), as well as in Kate Darling‘s work on this subject: she cites a case in which a colonel called off a mine-locating exercise because he thought the defusing robot’s hobbling on its two remaining legs was “inhumane”. Issues stemming from projection of this kind will be with us far sooner than anything like the show’s androids will be. Let’s focus on that first.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.


One Comment

  1. Thanks Wendy for pointing out the computational intractability of Asimov’s laws. But I think you could have been harsher in general about the ignorance and hazard of basing moral arguments on fiction. I’ve been writing about the bizarre and frankly hazardous tendency of humans to ascribe moral subjectivity to robots since 1996; my academic papers on the topic are listed here: http://www.cs.bath.ac.uk/~jjb/web/ai.html I’ve also been writing blog posts to try to make these ideas more generally accessible, here are two relatively successful ones: http://joanna-bryson.blogspot.com/2015/03/robots-are-more-like-novels-than.html (robots are more like novels than children) and http://joanna-bryson.blogspot.com/2015/10/clones-should-not-be-slaves.html (Clones should not be slaves). Because what Humans shows could only be achieved by cloning. There’s no mechanically engineered, computer programmed robot that would be so human.

    Comment submitted by Joanna Bryson
