Could you kill a robot?

Could you? Not because it was running amok and you had to, but just because… for no real reason. How about maiming one? Is there a moral issue involved? Or is it a daft question, considering that a robot has no consciousness or sentience?

Is destroying a robot the same as killing it? Which word generates the stronger emotional reaction?

[Image: Boston Dynamics robot]

These might seem like irrelevant questions, but they are actually important considerations as we prepare ourselves for the coming tide of robotics. Much is written about what will happen to the job market, and that is important too, of course, but there are also big issues of a more personal nature. What will it mean to live in a society where artificial or fake sentience is commonplace? What will happen to our definitions of humanity if artificial life can evoke the same responses from humans as real biological life does?

Wikipedia defines sentience like this…

Sentience is the ability to feel, perceive, or to experience subjectivity

The overarching question is whether or not the appearance of sentience will affect our societal codes, both legislated and social, and if so, how?

To kick off this brief (very brief) inquiry, have a quick look at this clip from the Graham Norton Show. Graham is showing Gary Oldman, Toni Collette and Nick Frost a £12,000 mini robot, who quite frankly is very sweet, for want of a less cloying term.

That robot certainly has an uncanny ability to generate real empathy from the human audience.

At the point that it lifts its hand, like a child, to walk with Norton, there is a sincere and substantial ‘aww’ from the audience; it is palpably emotive. When it falls and says ‘help me’, few people hear it (they are laughing at something else), but if you re-watch the clip and listen for the ‘help me’, it is strangely affecting.

There are other surprises too: when the technology is being stretched, the humanity disappears. At the point that it performs Gangnam Style, I feel the empathy goes totally AWOL. It’s impressive, I guess, but in a different way; it is no longer pulling on the heartstrings, it’s talking to a different part of me.

Language is important

It would seem that the markers of human discourse, or at least of a certain type of human discourse, are more subtle and less grand. In the biological realm it has long been understood that there is a particular class of ‘linguistic tool’ in normal human speech called discourse markers, which, when used, facilitate a more courteous exchange. These are the small utterances that appear as often self-deprecating filler when humans talk to each other. For example, people often say ‘probably’, ‘I think’ or ‘maybe’ even though they harbour no such doubts about what they are saying. I did it, without realising, in the previous paragraph when I wrote, “It’s impressive, I guess, but in a different way”.

[Image: android and human]

This article argues that mastering such ‘simplicities’ of language (the language is humanly simple, the mastery of it in a technical architecture anything but) will be a major feature of effective human – robot communication.

For my money this idea implies that the replication of human idioms within robotic architectures can generate sincere human responses. If robots act, in certain ways, like humans, then it seems that our autonomous thinking systems (which make up most of our thinking apparatus) will treat them like humans.
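To make that concrete, here is a toy sketch of how a robot’s speech layer might sprinkle hedges and discourse markers into otherwise flat statements. It is a minimal illustration of the idea only, assuming invented wordlists, probabilities and function names, not any real robot’s API.

```python
import random

# Hypothetical wordlists: hedges soften the claim, fillers open the turn.
HEDGES = ["I think", "maybe", "probably"]
FILLERS = ["well", "you know", "so"]

def soften(statement: str, p_hedge: float = 0.6, p_filler: float = 0.3) -> str:
    """Randomly prepend a hedge and/or a discourse-marker filler."""
    # Lower-case the opening so the prepended words read as one sentence.
    body = statement[0].lower() + statement[1:]
    if random.random() < p_hedge:
        body = f"{random.choice(HEDGES)} {body}"
    if random.random() < p_filler:
        body = f"{random.choice(FILLERS)}, {body}"
    return body[0].upper() + body[1:]

print(soften("You should mix the batter for two more minutes."))
# e.g. "Well, I think you should mix the batter for two more minutes."
```

The point of the randomness is that humans don’t hedge every utterance either; a robot that hedged constantly would sound as odd as one that never did.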

The researchers behind that article made videos of an “amateur baker” making cupcakes with a helper giving advice: in some videos the helper was a person, and in some the helper was a robot. Sometimes the helpers used hedges or discourse markers, and sometimes they didn’t.

You can probably already guess that the humans and robots that used hedges and discourse markers were described in more positive terms by the subjects who watched the videos. It’s somewhat more surprising to hear that the robots that hedged their speech came out better [than] the humans who did.

The author credits the novelty of robots as teachers for their superior performance in this experiment. That makes some sense, and it is testable too, as robotic interaction becomes commonplace. But for the moment I am more intrigued by understanding the action of our supposedly autonomous brains, largely because, if autonomous sub-conscious reactions are driving these empathetic feelings, then once the novelty wanes we can expect an even more seamless set of autonomic interactions with our robot compatriots. This line of thought suggests that the division between human – human interactions and human – robot interactions will become more, not less, opaque.

There are other confounding ideas to consider.

Recently a Washington Post journalist was called by a telemarketer selling health insurance, seemingly a pleasant, bright young woman. Something, however, felt a little off: he thought she was a robot. The call was recorded.

He is, in my opinion, slightly aggressive as he tests his inkling by repeatedly asking if she is a robot, to which she repeatedly answers “I am a real person”. You can see why he became suspicious; she does sound as though she could be a robot. But here’s the thing: so does he. Some of his challenges have a strangely artificial hue as well. My conjecture is that, as a result of believing he is talking to a robot, his use of discourse markers is reduced; this would also explain why his conversation comes across as unduly aggressive.

As it turns out, she wasn’t a robot. Well, not quite. ‘She’ was a program used by call operators with weak English or strong accents. So there is in fact a human triggering the interactions, which explains the pauses and the repeated phrasings: clearly the operators have a ‘script’ of some kind, managed by the software, which limits the conversational creativity that can be brought to bear. It also demonstrates quite clearly that our perceptions in the realm of human – robot interactions are not 100% effective.
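Structurally, this kind of hybrid is sometimes called a ‘soundboard’: a human operator listens and chooses, but the software plays fixed, pre-recorded clips. A minimal, hypothetical sketch of the pattern follows; the intents and phrases are invented for illustration, not taken from the actual call.

```python
# Canned responses keyed by intent; in the real system these would be
# audio recordings mapped to an operator's keypad or screen buttons.
CLIPS = {
    "greeting": "Hi there! How are you today?",
    "deny_robot": "I am a real person.",
    "pitch": "Can I ask you a few quick questions about your health cover?",
    "fallback": "Sorry, could you say that again?",
}

def operator_turn(intent: str) -> str:
    """The human operator picks an intent; the software plays the clip."""
    return CLIPS.get(intent, CLIPS["fallback"])

# The repeated, identical phrasings in the recorded call fall out of this
# design: the same key always plays exactly the same clip.
print(operator_turn("deny_robot"))
print(operator_turn("deny_robot"))  # word-for-word identical
```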

At one level we now have three choices to consider (human – human, human – robot, human – hybrid); on a more challenging level, we need to consider the pitfalls and troubles that these complications could cause if our perceptions fail us and we get our sub-conscious analysis wrong.

Being rude to a real person, which is what happened here, is just one such problem.

However, to take it to the extreme, if you can’t distinguish between artificial and human communication in a world where faceless communication is increasingly becoming the norm, then what is the nature of humanity in this faceless future? Is there a meaningful division at all?

[Image: robot bartender]

So, back to killing a robot… it’s really not as cut and dried as it might have seemed. Robots have the ability to play with your emotions, and you might be utterly unprepared for that.

Kate Darling is a researcher who is looking at robot ethics (hooray). Alongside safety, liability and privacy, Kate also spends time looking at the social side of human – robot interaction.

She recently ran a popular session at Harvard, where she tackled the killing question head-on.

“….she encouraged participants to bond with these robots, then to destroy one. Participants were horrified, and it took threats to destroy all robots to get the group to destroy one of the six. She observes that we respond to social cues from lifelike machines, even if we know they are not real”

There are some interesting times up ahead for us all. I’m glad that people like Kate are sufficiently forward-thinking to start the dialogue now. It’s almost guaranteed that we won’t be ready, but at least we won’t be totally unaware.


