How can we know what the world is really like?
We often hear fairly frank opinions about how things ‘really’ are. We probably make these kinds of claims ourselves from time to time: ‘the fact is…’; ‘that’s just the way it is…’; ‘you know what it’s like…’
But how do we know what we think we know? And what makes us so sure that our assumptions are right?
Since our perceptions don’t extend very far (not even as far as the next room), isn’t it rather extraordinary that we think we know quite a lot about how things are?
But as anthropologist Clifford Geertz wrote:
“What we call our data are really our own constructions of other people’s constructions of what they and their compatriots are up to” (Geertz 1973: 9, quoted in Schultz 1995).
It’s simple: we just ask our neighbours…
Steven Strogatz describes an intriguing experiment in his book Sync (2003: 250f.; from Watts, 1999, ch. 7).
It’s called the ‘density classification problem for one dimensional binary automata’.
In plainer language: a thousand light bulbs arranged in a ring, each either on or off. Each bulb ‘sees’ only what its three neighbours on either side are doing. At each time step every bulb can ‘decide’ to be on or off in the next round, and, by some rule to be determined, the network as a whole must work out whether most of the bulbs were initially on or off.
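To make the setup concrete, here is a minimal sketch in Python (the parameters and the update rule are illustrative — this is plain local majority voting, not the GKL rule): each bulb polls itself and its three neighbours on either side and adopts whatever the majority of those seven is doing.

```python
import random

RADIUS = 3  # each bulb sees 3 neighbours on either side

def majority_step(state):
    """One synchronous update: each bulb adopts the majority value of its
    neighbourhood (itself plus RADIUS neighbours on each side, 7 voters)."""
    n = len(state)
    return [1 if sum(state[(i + d) % n] for d in range(-RADIUS, RADIUS + 1)) > RADIUS
            else 0
            for i in range(n)]

def classify(state, steps=1000):
    """Run the automaton; the task counts as solved only if the ring settles
    to all-on or all-off. On a plain ring, local majority tends to freeze
    into stable blocks instead."""
    for _ in range(steps):
        nxt = majority_step(state)
        if nxt == state:
            break
        state = nxt
    if all(state):
        return "on"
    if not any(state):
        return "off"
    return "stuck"

random.seed(0)
bulbs = [random.randint(0, 1) for _ in range(1000)]
print(classify(bulbs))  # on a regular ring this typically ends up 'stuck'
```

Note the failure mode: any solid block of same-coloured bulbs longer than the neighbourhood radius is a fixed point of the majority rule, so the ring freezes into stripes instead of reaching a verdict.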
…And our neighbours tell us
The puzzle was ‘solved’ in the late 1970s with a rule (the GKL rule) that gets the computation right 78% of the time.
[Gacs, Kurdyumov and Levin, 1978] P. Gacs, G.L. Kurdyumov and L.A. Levin. Problemy Peredachi Informatsii, 14:92–98, 1978.
But in 1995 it was shown that the task as formulated cannot be solved perfectly.
[Land and Belew, 1995] M.W.S. Land and R.K. Belew. “No two-state CA for density classification exists”. Physical Review Letters, 74(25):5148–51, 1995.
So the task was now to find the least imperfect rule.
Before small world theory came along, the best rule could find the correct answer 82% of the time.
[Juillé and Pollack, 1998] H. Juillé and J.B. Pollack. “Coevolving the ‘ideal’ trainer: Application to the discovery of cellular automata rules.” In: J.R. Koza, W. Banzhaf, K. Chellapilla, M. Dorigo, D.B. Fogel, M.H. Garzon, D.E. Goldberg, H. Iba and R.L. Riolo (eds.). Genetic Programming 1998: Proceedings of the Third Annual Conference, San Francisco, CA: Morgan Kaufmann, 1998.
However, by introducing a few random shortcuts into the network (turning it into a ‘small-world network’), Watts and Strogatz made a previously failing strategy (majority rule) succeed 88% of the time.
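The shortcut idea can be sketched as follows (illustrative parameters, not Watts and Strogatz’s exact construction): re-point a small fraction of each bulb’s links at randomly chosen bulbs, then run the same local-majority rule over the rewired neighbour lists.

```python
import random

def ring_neighbours(n, radius=3):
    """Regular ring: bulb i is linked to radius neighbours on each side."""
    return [[(i + d) % n for d in range(-radius, radius + 1) if d != 0]
            for i in range(n)]

def add_shortcuts(neighbours, p=0.05, seed=0):
    """With probability p, re-point each link at a uniformly random bulb,
    turning the regular ring into a small-world network."""
    rng = random.Random(seed)
    n = len(neighbours)
    return [[rng.randrange(n) if rng.random() < p else j for j in lst]
            for lst in neighbours]

def majority_step(state, neighbours):
    """Each bulb adopts the majority vote of itself plus its neighbours."""
    return [1 if 2 * (state[i] + sum(state[j] for j in lst)) > len(lst) + 1
            else 0
            for i, lst in enumerate(neighbours)]

def classify(state, neighbours, steps=200):
    """Run until fixed or out of steps; success means a uniform verdict."""
    for _ in range(steps):
        nxt = majority_step(state, neighbours)
        if nxt == state:
            break
        state = nxt
    return "on" if all(state) else "off" if not any(state) else "stuck"

# Compare the same initial state on both topologies.
rng = random.Random(1)
n = 1000
ring = ring_neighbours(n)
small_world = add_shortcuts(ring)
init = [rng.randint(0, 1) for _ in range(n)]
print("ring:", classify(init, ring))
print("small world:", classify(init, small_world))
```

The shortcuts matter because they carry ‘votes’ across the ring: a locally frozen block now hears from distant bulbs, so the global majority can propagate instead of stalling at block boundaries.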
But our neighbours can get it wrong
As the story is told, it reads like a great success story for small-world theory. But that’s not the most interesting bit. Try looking at it the other way round: think not of the successes (88%) but of the failures. What the experiment is saying is that in a very simple model of a well-connected network, faced with a very simple empirical calculation (were most bulbs on or off at the start?), even the very best strategies fail to find the correct answer between 12% and 18% of the time.
What has this to do with the Four Cultures?
“Artificial social life – playing around with dim-witted creatures and a few simple rules that even they are capable of following and getting them to generate the rich, complicated and highly intelligent behaviours that we recognise as ours – promises a whole new dawn for social science: a new dawn in which the rising sun is the relationality of the individual.” (Thompson 2008: 147)
I think the density classification problem is an example of how this kind of research program may proceed, because it tells us something about the value of ‘social learning heuristics’ and about situations where totally correct information isn’t available. Exactly what it might say, I’ll deal with in the next post.
To be continued…