That was the Y2K that wasn’t


‘No planes fell from the sky, but a lot happened to keep them from doing so’.


This is a common view of the Y2K bug among software engineers and IT professionals in Anglo-American societies. For them it may be true that their hard work saved civilization from a date-related digital Armageddon, but in much of the rest of the world next to nothing was done, and yet, conspicuously, planes still didn’t fall from the sky.

So what was going on?

The story of the Y2K bug is a marvellous example of how our subjective conceptions don’t just shape our view of reality; they shape objective reality itself.

Was the Y2K bug a serious threat or not? You’d think there’d be a straight and clear answer to this question, but it seems impossible to find one. The distinction between subjective and objective truth appears to dissolve before our eyes, and if it can do so in relation to a super-expensive, high-stakes, world-wide emergency like Y2K, where else can it similarly dissolve?

The outcome of the Y2K bug has been used as a vindication of the ‘precautionary principle’, but also as a critique of that principle and an argument in favour of the ‘fix on failure’ principle. Most of the favourable reporting has focussed on the positive ‘unintended consequences’, the ‘surprising legacy’ of Y2K preparation (especially the structural development of the IT industry), rather than demonstrating that a disaster actually was averted.

Economist John Quiggin has been the single most cogent thinker on Y2K, especially since his measured scepticism predates the benefit of hindsight. Two of the points he makes are especially worth reflecting on: first, that blame-allocation schemes generally produce bad policy; and second, that some form of institutionally sanctioned scepticism is indispensable.

Below is a list of resources, placed in order of increasing depth of coverage/insight.


Newsweek’s list of most overblown fears

Article from Slate Magazine

US Senate Committee final report

Public Radio miniseries – the surprising legacy of Y2K

Phillimore, J. and Davison, A. (2002) ‘A precautionary tale: Y2K and the politics of foresight’, Futures, 34 (2), pp. 147–157.

John Quiggin paper

More Quiggin


For a fourcultures take on this kind of thing, see The Dam Bursts.




The Ethics of Autonomous Robots

Further to a recent post about the ethics of autonomous robots, it seems military robots are not the only kind that can kill, albeit by ‘mistake’. In Japan there are already robots that feed the elderly, and baby-sitting robots in shopping centres. So who exactly should be held responsible when they go wrong? It’s an issue that has concerned Noel Sharkey of Sheffield University for a while (he and Ronald Arkin were interviewed for the radio recently), and now the Royal Academy of Engineering has weighed in with a discussion report.

Autonomous Systems: Social, Legal & Ethical Issues, commissioned by the Academy’s Engineering Ethics Working Group, is online at

It’s an interesting read, but it doesn’t begin to ask the kind of questions grid-group cultural theory might….

Beware – Dangerous Robots!

Dan Kahan of the Cultural Cognition Project has been thinking about the possible ways of reacting to robots that kill. It’s a relatively new set of technologies, but what happens when AI merges with weaponry to produce robots that want to kill you? He thinks the arguments could go in several ways and I tend to agree.

The ethics of this are already being worked out, with the aim of making robots behave ‘more humanely than humans’. A summary is available.

The title of a key book on the subject points to the potential contradictions:

Governing Lethal Behavior in Autonomous Robots

Governance is great – as long as we’re the ones in charge

The context in which all this is happening is a Hierarchical one: the so-called military-industrial complex. Hence the great significance of the term ‘Governing’. For Hierarchy, governing is exactly the correct response to ‘lethal behaviour’ – and this applies to all lethal behaviour, not just that of robots, which in a sense are nothing special. The point is, in the Hierarchical worldview violence is warranted, provided it is clear who is doing the warranting. But lethal robots present something of a problem. What happens if they aren’t programmed to be ‘governed’?

In the crowd, does everyone really think they’re king?

[Image: sign at Brixton Academy]

The idea is suggested in a Wall St Journal article about mass sporting events. Why do the Americans sit down when the British (historically) stand up? The answer: in the US a universal sense of nobility, and in the UK a tradition of wallowing in mud. Apparently.

The article is interesting because it points out the connection between cultural norms and the concept of risk. In the UK, the author observes, safety issues were used to enable a shift away from standing and towards seated-only stadia. But it would be interesting to see what clear evidence there is for the common claim that standing at sports events is more dangerous than the alternatives. The Bradford fire of 1985, for instance (see below), wasn’t caused by people standing. The dangers there related more to terrible stand construction, to the locking of escape exits, and to the lack of any evacuation plan that could work. I’d tentatively suggest that this is an example of risk being used as the occasion for a particular cultural bias to argue its case. If you can successfully argue that the alternatives to your plan are ‘just too risky’, you’ve won the argument. Remember, though, that what from one cultural perspective seems like a threat, from another angle is no threat at all. I think people actually like whatever sense of edginess there is to standing in a crowd of thousands. For many the risk is worth it.

When all that unites us is our fear

At New Statesman magazine, Hugh Aldersey-Williams quotes Mary Douglas and Aaron Wildavsky’s Risk and Culture:

“people select their awareness of certain dangers to conform with a specific way of life”. He worries that we may reach a state in which “all we have in common is our fears”.

Actually, it’s very unlikely we’ll reach a consensus on our fears. The question of risk is a vexed one. According to Ulrich Beck, modernity is the process by which progress is overtaken by its negative side effects, so that the side effects, especially pollution of all sorts, become the main event. This is the ‘risk society’, in which we are increasingly defined by our status vis-à-vis threats to life – we take ‘social risk positions’. In stark contrast, Frank Furedi sees this as shamefully defeatist. For Furedi, human ingenuity is the flame that burns eternal, and there is no threat that isn’t in the end a wonderful opportunity. He disparages Beck’s thesis as ‘the culture of fear’. So who is correct? My money is on something known as grid-group cultural theory (developed by Douglas, Wildavsky and others), which proposes there are four mutually antagonistic cultural perspectives which institutions and the individuals in them can adopt. Beck speaks for ‘Egalitarianism’, Furedi for ‘Individualism’, but there are two others, ‘Fatalism’ and ‘Hierarchy’. All coalitions of risk (e.g. the idea that wearing seatbelts in cars has saved lives – see the work of John Adams) are no more than fairly unstable temporary agreements between two or more of these.