
Kerstin Dautenhahn is Professor of Artificial Intelligence at the University of Hertfordshire. Her research interests include social learning, human-robot interaction and social robotics. She is the co-creator of KASPAR, a robot designed to help children with autism develop communication skills.
Making the third presentation at this event, Kerstin explained her background in biological cybernetics, and the ways that her work revolves around the interactions between humans and robots/AI, concerned particularly with systems that aim to help people.
She was keen to make one thing immediately clear: robots are not people.
Elaborating, she pointed out that each robot you encounter is a de novo creation: not only does it lack a common biological heritage with humans – making it unavoidably alien – but it has no necessary shared characteristics (either deep or apparent) with any other robot.
Further, now and for the foreseeable future (in her opinion), robots have no psychology – there is no ‘mind’ in there.
The term robot, then, is a moving target, without a firm definition (I was surprised that we weren’t reminded of the etymological origin of the word ‘robot’ in the Czech robota, meaning forced labour), so that any conversation about robots must be particularly clear as to terms. This, however, is difficult, because of two very strong human traits:

  • the anthropomorphic tendency – whereby humans ascribe agency to inanimate objects given the most minimal of suggestive cues,
  • our narrative mode of thought – the tendency to construct coherent stories and ascribe meaning to any sequence of events – again, given the thinnest of material.

Dautenhahn talked us through her experiments with the Robot House – a facility born from the realisation that most human/robot interaction research takes place in ‘strange’ laboratory settings, undermining its findings.
The Robot House is a typical three-bedroom UK house, with two storeys and normal furniture and fittings, but which has been wired with many kinds of sensors, so that human/robot interaction can be captured as ‘naturally’ as possible.
We were shown an experiment in which volunteers entering the house were met by a robot (previously unknown to them), which spoke to explain that it belonged to a friend of theirs, who was absent, and requested help in completing some tasks the friend had set it.

During a first phase, the robot either functioned ‘correctly’ or was obviously ‘strange’ – making nonsensical statements or requests. Then, the robot asked the volunteers to do various questionable things on the friend’s behalf, such as:

  • throw a pile of unopened letters into a wastebasket,
  • water a houseplant with orange juice,
  • open the friend’s laptop,
  • use the friend’s online passwords – supplied by the robot.

Although participants were easily able to identify the robot’s behaviour as correct or faulty (and responded to ‘correctness’ with stronger anthropomorphic language), this judgement strangely had little impact on their compliance with the odd behavioural requests:

                 Letters   Plant   Laptop   Passwords
  Non-Compliant       10      30        0           0
  Compliant           90      60      100         100

These results underline Dautenhahn’s opening stricture – robots are not people – with the evidence that people don’t respond to robots as they would to humans. Participants who judged a newly encountered robot to be faulty nevertheless complied with questionable instructions from that same robot, to the extent of borderline unethical behaviour. Later, participants justified these actions – which they understood were questionable – in a wide variety of ways: they knew ‘why’ they had done these things, or post-rationalised their actions.

What was especially interesting was that, when we heard these results, it was clear from the audience reaction that most of us considered the participants’ behaviour inexplicable – and yet, given that the participants were ordinary people, just like us, there seems no reason to expect that we would have behaved ‘better’.

[I suspect that fraudsters and other malefactors will ab/use this response pattern in the coming years, with results that may well seem profoundly strange, and which may be hard to protect against. My reasoning goes like this:

  • Hearing the account of the experiment, we considered the robot as a machine and did NOT anthropomorphise it – allowing us to find it incomprehensible/amusing that the humans had acted as they did.
  • This implies that it will be difficult for humans to teach/learn behaviours that minimise adverse outcomes – because it will be very hard to recreate the real experience of interacting with a robot (we will not anthropomorphise the ‘abstract’ robot in the ‘teaching’, but will be unable to avoid doing so in real encounters).
  • Even if this is achieved with one robot, the lack of shared characteristics between robots will likely render any particular approach invalid despite apparently similar circumstances.]

Lastly, Dautenhahn showed us KASPAR, a robot designed to interact directly with autistic children. Several videos showed that the carefully designed reactions of the robot – simplified and predictable versions of human emotional and social responses – helped children learn to interact, to express their own feelings, and to recognise those of others. Although KASPAR is experimental, there have been so many requests for the robot to be made available that there is a project to support general production.