Further to a recent post about the ethics of autonomous robots, it seems military robots are not the only kind that can kill, albeit by ‘mistake’. In Japan there are already robots that feed the elderly, and baby-sitting robots in shopping centres. So who exactly should be held responsible when they go wrong? It’s an issue that has concerned Noel Sharkey of Sheffield University for a while (he and Ronald Arkin were recently interviewed on the radio), and now the Royal Academy of Engineering has weighed in with a discussion report.
Autonomous Systems: Social, Legal & Ethical Issues, commissioned by the Academy’s Engineering Ethics Working Group, is online at http://www.raeng.org.uk/autonomoussystems
It’s an interesting read, but it doesn’t begin to ask the kind of questions grid-group cultural theory might…
In particular, since it has a mono-polar view of ethics, with no conception of the nuances of cultural plurality, it cannot avoid promoting one particular cultural bias over and against others:
> should certain avenues of research be abandoned because there is significant objection to the idea of that research – or is technology push sometimes the right thing? After all, there were objections to the idea of heart transplants when this was a new technology but it is now an accepted and valued area of medical practice. (p. 12)
The aim seems to be that public discussion should take place now (a laudable aim, surely) so that it becomes easier to stop opponents from trying to outlaw certain types of technology (a perhaps less laudable aim, especially if you yourself happen to be one of the naysayers).
It’s true that we accept all sorts of technological danger now that we didn’t use to. Railways were regarded early on as highly dangerous, while the first bicycle actually hit a passer-by on its maiden journey, causing the inventor to be fined (the official doing the fining was so impressed with the invention that he paid the fine himself!). When escalators were installed on the London Underground, a one-legged man was employed to ride up and down them all day just to persuade people they were safe. And as for the dangers of cars – one hardly knows where to start, and yet we accept these killing machines as a normal part of everyday life.
Risk is a highly cultural phenomenon, and when the discussion of autonomous robot risk gets properly started, we’ll see it has four conflicting tendencies:
- Professionalising approach – risks can and should be managed by experts (all, no doubt, paid-up members of the Academy of Engineering), with regulations limiting, but not eliminating, liability.
- Randomising approach – impacts are inevitable; risk is reframed as ‘bad luck’.
- Externalising approach – customer sovereignty demands that customers also sign away any supposed right to protection. Providers should not be expected to foot the bill for excessive regulation – it would make them uncompetitive.
- Demonising approach – robot autonomy is seen as ‘unnatural’ and in certain cases to be banned. Ethics exists to limit the irresponsible ambitions of technology, and to instil realism/scepticism into the concept of ‘progress’.
With this framework in view, inspired by grid-group cultural theory’s four cultural biases, it’s clear that the Academy’s report, like the Academy itself, already adopts a strongly professionalising, Hierarchical approach (as, arguably, it should), while fearing and seeking to pre-empt a demonising, Egalitarian approach (as, arguably, it should not).
It is encouraging that the public discussion is being welcomed, even initiated, by the engineers, since deliberation might alternatively have been seen as the traditional home turf of the Egalitarian.
Now read: Beware – Dangerous Robots!