Dan Kahan of the Cultural Cognition Project has been thinking about the possible ways of reacting to robots that kill. It’s a relatively new set of technologies: what happens when AI merges with weaponry to produce robots that want to kill you? He thinks the arguments could go in several ways, and I tend to agree.
The ethics of this is already being worked out, with the aim of making robots behave ‘more humanely than humans’; a summary is available.
The title of a key book on the subject points to the potential contradictions:
Governing Lethal Behavior in Autonomous Robots
Governance is great – as long as we’re the ones in charge
The context in which all this is happening is a Hierarchical one: the so-called military-industrial complex. Hence the great significance of the term ‘Governing’. For Hierarchy, governing is exactly the correct response to ‘lethal behavior’ – and this applies to all lethal behaviour, not just that of robots, who in a sense are nothing special. The point is that, in the Hierarchical worldview, violence is warranted, provided it is clear who is doing the warranting. But lethal robots present something of a problem. What happens if they aren’t programmed to be ‘governed’? In other words, the Hierarchist anxiety is that these machines must at least partially behave ‘autonomously’, and this autonomy may require new forms of management. The type of ethics considered within this framework will be Hierarchical: it will focus on procedural ethics and make much of the concept of proportionality. Ends and means will be balanced. It will not regard other ethical frameworks as coherent.
Just win the damn war…
This Hierarchical approach to lethal robots assumes a hierarchy of governance that puts the military command on the top – they govern – and the robots on the bottom – they are governed. But think about the situation from the perspective of a hypothetical enemy for a moment. From this perspective, the Hierarchical model breaks down. For the enemy, the lethal robots by definition can’t be governed, and short of surrender it is not acceptable for them to do the governing. Instead, the situation is one characterised by Individualism. The robots effectively are autonomous and unmanageable, and the appropriate response is one of competition. The Individualist book that waits to be written is called Defeating Lethal Behavior in Autonomous Robots, and it will focus on identifying the competitive advantages humans retain, or can develop, on the battlefield. The Individualist ethics considered in such an approach will be substantive – that is to say, it will focus on outcomes. If the evil of lethal robots is to be overcome, then the means will tend to be justified by the ends. Stood on its head again, such an approach can be taken by the owners of the robots to argue, hawkishly, that there’s no real point in governing lethal robots, since this amounts to pulling their punches for them. The best lethal robot, from this Individualist perspective, is an autonomous one. And what happens if, as the Hierarchists worry, they go rogue on us? For Individualists, this may well be a price worth paying. Individualists aren’t big on accountability. After all, on the battlefield you don’t manage, you win. (Incidentally, this view explains why political hawks clash with senior military managers, and why the managers feel the hawks made them lose.)
Ban them all!
But there is another approach to lethal robots, and that is the Egalitarian approach. For Egalitarianism, both governance and competition are sub-optimal relationships. Rather, inclusion is the preferred method. The shadow side of this is that when inclusion fails, the only alternative is exclusion. This is because for Egalitarianism, group membership is all – you’re either in the group or you’re out of it. So how might this work out in relation to autonomous robots? The obvious approach is for Egalitarianism to propose a worldwide ban on lethal robots. The rationale would be that since they are so risky, no one should develop or prime them. In fact, I’d be amazed if a proposal for an ‘autonomous robot non-proliferation treaty’ isn’t already being drafted somewhere. If this failed, an inclusionary strategy would be to ensure everyone has access to such technology. This would create a kind of Egalitarian standoff in which there was no point in using killer robots, since they’d be ubiquitous. The Egalitarian ethics of lethal robot behaviour would be a normative ethics and would focus on the means at least as much as the ends. For this reason, much would be made of the sheer distastefulness of robots killing humans, irrespective of their supposed ‘efficiency’. The concept of humane robots would be met with severe Egalitarian scepticism.
Keep your head down
Finally, the Fatalist approach to lethal behaviour in autonomous robots revolves around two themes. First, technological innovation is just one of the many forces trying to prevent us from having a quiet life, and it’s as unstoppable as a juggernaut. Just like the atom bomb, and like gunpowder before it, once the killer robot genie’s out of the bottle there’s no putting it back in. Talk of managing, competing, including/excluding is all wishful thinking. Second, killer robots are too worrying to think about. If you’re lucky they’ll be on your side. If you’re unlucky they’ll be one more example of why today is undoubtedly a better day than tomorrow, so make the most of it while you can. In power, a Fatalist approach would be to ensure research funding for lethal autonomous robots is allocated randomly, rather than by competitive tender or any other method.
Cultural Theory
A concluding thought is that cultural theory may yet have a part to play in the programming of AI in the future, if not already, since it claims to present, at the very least, a typology of useful heuristics for social behaviour. (See, for instance, Maner, W., “Heuristic Methods for Computer Ethics”, Metaphilosophy, Vol. 33, No. 3, April 2002, pp. 339–365.) On the other hand, it may become a useful tool for critiquing the machinations of those who suppose that rationality – something aimed at by both Artificial and non-artificial Intelligence – is mono-polar.
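As a purely speculative sketch of what such a ‘typology of heuristics’ might look like in code – nothing here comes from the post or from Maner’s paper, and every name and heuristic string below is a hypothetical illustration – one could encode the four cultural biases and their characteristic responses to lethal robots roughly as follows:

```python
# Hypothetical sketch: the four Grid-Group cultural biases as a typology of
# heuristics a system could consult when framing a policy question about
# lethal autonomous robots. All names and heuristic texts are illustrative.

from enum import Enum, auto


class CulturalBias(Enum):
    HIERARCHY = auto()
    INDIVIDUALISM = auto()
    EGALITARIANISM = auto()
    FATALISM = auto()


# Each bias maps to the characteristic response described in the post.
RESPONSE_HEURISTICS = {
    CulturalBias.HIERARCHY: "Govern: regulate lethal behaviour through procedure and proportionality.",
    CulturalBias.INDIVIDUALISM: "Compete: seek advantage; judge the means by the outcomes.",
    CulturalBias.EGALITARIANISM: "Include or exclude: ban the technology outright, or give everyone access.",
    CulturalBias.FATALISM: "Cope: treat the technology as unstoppable and keep your head down.",
}


def frame_question(question: str) -> dict:
    """Return the same question framed through all four biases, not just one."""
    return {
        bias.name.title(): f"{question} -> {heuristic}"
        for bias, heuristic in RESPONSE_HEURISTICS.items()
    }


if __name__ == "__main__":
    for frame, framing in frame_question("Should lethal autonomous robots be deployed?").items():
        print(f"{frame}: {framing}")
```

The only point of such an encoding would be to force a system (or its designers) to consider all four framings, rather than treating the Hierarchical one as the sole rational option.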
Interesting post. As near as I can tell, the Governance perspective holds the greatest sway in academic security studies. There is no accepted norm or precedent for assigning responsibility with autonomous, lethal robots. As these robots appear on the battlefield, different militaries will inevitably develop different approaches to both governance and accountability. Proportionality may be very much in the eye of the beholder, witness the recent Gaza conflict.
Concerning Individualism: that, to me, sounds like an amoral perspective on combat. While an amoral approach to war is possible, I am skeptical that major powers will openly discuss it. We tore our collective hair out at the ethics of decisions over torture and atomic weapons, and rightfully so, even when the commanders in charge were convinced of the validity of the decisions made. Abhorrent practices in war are always wrapped in normative frameworks that necessitate and justify brutality.
Fatalism is many decades off; the international community isn’t prepared to give up land mines, let alone as-yet-undeveloped robots.
Thanks for your illuminating comments, Ben. I quite agree with you. ‘Governance’ is clearly the dominant military research approach, and my post was pointing out that there are also other possible ways of looking at the issue. The analysis here implies there may be a conflict between the Hierarchical defense department view (managing everything) and the Individualist view of some politicians (winning without accountability). Note that while we did ‘tear our collective hair out’ over a number of issues relating to the Gulf, including torture, certain highly influential politicians – Dick Cheney, for instance – emphatically did not. The Grid-Group Cultural Theory perspective would be that this wasn’t amoral within the framework of the Individualist worldview, but it probably would be immoral from the perspective of the other three cultural biases. We have competing moralities of warfare and military innovation that correspond to the four cultures of Grid-Group analysis. If we really want to make robots behave ‘more humanely than humans’ we need to take this into account. The point is that moral relativism is a feature of human social organisation, but it is very far from ‘anything goes’. This is a constrained pluralism: only four things go. Those who think it’s all about ‘governing’ ignore the other three approaches at their peril.
This is a great blog.