How do we know what we think we know? What the Density Classification Problem tells us

How can we know what the world is really like?

We often hear fairly frank opinions about how things ‘really’ are. We probably make these kinds of claims ourselves from time to time: ‘the fact is…’, ‘that’s just the way it is…’;  ‘you know what it’s like…’

But how do we know what we think we know? And what makes us so sure that our assumptions are right?

Since our perceptions don’t extend very far (not even as far as the next room) isn’t it rather extraordinary that we think we know quite a lot about how things are?

But as anthropologist Clifford Geertz wrote:

“What we call our data are really our own constructions of other people’s constructions of what they and their compatriots are up to” (Geertz 1973: 9, quoted in Schultz 1995).

It’s simple: we just ask our neighbours…
Steven Strogatz describes an intriguing experiment in his book, Sync (2003: 250f.; from Watts, 1999, ch. 7).
It’s called the ‘density classification problem for one-dimensional binary automata’.
In plainer language, it’s a thousand light bulbs in a ring, each either on or off. Each light bulb only ‘sees’ what its three neighbours on either side are doing. At each time step, each bulb can ‘decide’ to be on or off in the next round, and by some rule to be determined, the network must work out whether most of the bulbs were initially on or off.
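To make the setup concrete, here’s a minimal Python sketch (my own illustration, using a 99-bulb ring rather than a thousand). It also shows why the naive ‘copy the local majority’ strategy fails on an ordinary ring: blocks of four or more identical bulbs freeze at their boundaries, so the right verdict never spreads all the way round.

```python
def majority_step(cells):
    """One synchronous update: each bulb adopts the majority state of its
    7-cell neighbourhood (itself plus 3 neighbours on either side)."""
    n = len(cells)
    return [1 if sum(cells[(i + d) % n] for d in range(-3, 4)) > 3 else 0
            for i in range(n)]

def run(cells, max_steps=200):
    """Update until the ring stops changing (or we give up)."""
    for _ in range(max_steps):
        nxt = majority_step(cells)
        if nxt == cells:
            return cells
        cells = nxt
    return cells

# 50 of 99 bulbs start on, so the correct verdict is 'mostly on':
# the ring should end up all-on. Instead the rule freezes immediately,
# leaving the two blocks exactly as they were.
ring = [1] * 50 + [0] * 49
final = run(ring)
print(final == [1] * 99)  # False: the plain majority rule misclassifies
```

The freeze happens because a bulb at the edge of a long block already sees four of its own kind in its seven-cell window, so neither block can erode the other.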

…And our neighbours tell us
The puzzle was ‘solved’ in the late 1970s with a rule that would get the computation correct 78% of the time (the GKL rule).
[Gacs, Kurdyumov and Levin, 1978] P. Gacs, G.L. Kurdyumov and L.A. Levin. Problemy Peredachi Informatsii, 14:92-98, 1978.
But in 1995 it was shown that the task as formulated could not be solved perfectly.
[Land and Belew, 1995] M.W.S. Land and R.K. Belew. “No two-state CA for density classification exists”. Physical Review Letters, 74(25):5148-51, 1995.
So the task was now to find the least imperfect rule.
Before small world theory came along, the best rule could find the correct answer 82% of the time.
[Juillé and Pollack, 1998] H. Juillé and J.B. Pollack. “Coevolving the ‘ideal’ trainer: Application to the discovery of cellular automata rules.” In: J.R. Koza, W. Banzhaf, K. Chellapilla, M. Dorigo, D.B. Fogel, M.H. Garzon, D.E. Goldberg, H. Iba and R.L. Riolo (eds.). Genetic Programming 1998: Proceedings of the Third Annual Conference, San Francisco, CA: Morgan Kaufmann, 1998.
However, by introducing a few random shortcuts to the network (turning it into a ‘small world network’), Watts and Strogatz made a previously failing strategy (majority rule) work successfully 88% of the time.
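The rewiring idea can be sketched like this (my own illustration, not Watts and Strogatz’s code; the shortcut probability `p`, the ring size and the random seed are all arbitrary choices). Each bulb still applies plain majority rule, but some of its six watched bulbs are replaced by random ones elsewhere in the ring:

```python
import random

def make_neighbourhoods(n, p=0.1, seed=42):
    """Each bulb watches six others: normally 3 on each side, but with
    probability p a local link is rewired to a random bulb anywhere in
    the ring -- the 'small world' shortcuts."""
    rng = random.Random(seed)
    nbrs = []
    for i in range(n):
        local = [(i + d) % n for d in (-3, -2, -1, 1, 2, 3)]
        nbrs.append([rng.randrange(n) if rng.random() < p else j
                     for j in local])
    return nbrs

def run_majority(cells, nbrs, max_steps=200):
    """Majority rule over each bulb's own state plus its six watched bulbs."""
    for _ in range(max_steps):
        nxt = [1 if cells[i] + sum(cells[j] for j in nb) > 3 else 0
               for i, nb in enumerate(nbrs)]
        if nxt == cells:
            break
        cells = nxt
    return cells

ring = [1] * 50 + [0] * 49  # 50 of 99 on: the correct verdict is all-on
final = run_majority(ring, make_neighbourhoods(99))
```

The shortcuts matter because they let a bulb sample distant parts of the ring, so the frozen blocks that defeat the purely local rule can be broken up; whether any particular random rewiring classifies a given input correctly still varies, which is why the success rate is 88% rather than 100%.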
But our neighbours can get it wrong
As the story is told, it reads like a great success story for small world theory. But that’s not the most interesting bit. Try looking at it the other way round: think not of the successes (88%) but of the failures. What the experiment is saying is that in a very simple model of a well-connected network, with a very simple empirical calculation to perform (were most bulbs on or off at the start?), the very best strategy fails to find the correct answer between 12% and 18% of the time.
What has this to do with the Four Cultures?
The last words of Michael Thompson’s latest book, Organising and Disorganising, are these:
“Artificial social life – playing around with dim-witted creatures and a few simple rules that even they are capable of following and getting them to generate the rich, complicated and highly intelligent behaviours that we recognise as ours – promises a whole new dawn for social science: a new dawn in which the rising sun is the relationality of the individual.” (Thompson 2008: 147)
I think the density classification problem is an example of how this kind of research program may proceed, because it tells us something about the value of ‘social learning heuristics’ and about situations where totally correct information isn’t available. Exactly what it might say, I’ll deal with in the next post.
To be continued…

4 thoughts on “How do we know what we think we know? What the Density Classification Problem tells us”

  1. Strogatz doesn’t specify which types of rule are acceptable, and so the question is ill-defined.

    For instance, suppose each bulb is allowed to keep two integer variables, which count how many bulbs it has seen which are on, and how many are off, and the ring size R is known to the bulb. If the sum of the two variables is at least R, the bulb chooses to be on or off according to which of these numbers is larger. If not, then the bulb looks at its 3 neighbors to the right, adds that information to the variables, and then copies the value of its rightmost neighbor.

    This will solve the problem in ceil(R/3) steps, much smaller than the limit of 2R steps that Strogatz (and others?) allow.
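    For what it’s worth, the scheme can be rendered in Python roughly as follows (my sketch of the comment above, not tested against the original proposal; seeding each bulb’s counters with its own initial state, and the synchronous update order, are my assumptions):

```python
def counting_rule(initial):
    """Each bulb keeps on/off tallies (seeded here with its own state --
    an assumption). Each step it tallies the 3 bulbs to its right, then
    copies the value of its rightmost right-hand neighbour, so the ring's
    values shift three places per step and every bulb eventually sees
    them all."""
    R = len(initial)
    cells = list(initial)
    on = list(cells)
    off = [1 - c for c in cells]
    verdict = [None] * R
    steps = 0
    while any(v is None for v in verdict):
        steps += 1
        for i in range(R):
            for d in (1, 2, 3):
                on[i] += cells[(i + d) % R]
                off[i] += 1 - cells[(i + d) % R]
            if verdict[i] is None and on[i] + off[i] >= R:
                verdict[i] = 1 if on[i] > off[i] else 0
        cells = [cells[(i + 3) % R] for i in range(R)]
    return verdict, steps

# 60 of 99 bulbs on: every bulb should conclude 'on' in about R/3 steps.
votes, steps = counting_rule([1] * 60 + [0] * 39)
print(all(v == 1 for v in votes), steps)  # True 33
```

    With this seeding the counters reach R after 33 steps on a 99-bulb ring, consistent with the ceil(R/3) claim above; as noted, such a rule keeps per-bulb memory and knowledge of R, which the strict two-state formulation forbids.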

    1. Thanks for reading, Christopher. That’s an interesting approach. What attracted me to this subject was the claim of Land and Belew (1995) that under specific conditions, “No perfect two-state cellular automata for density classification exists”. For me this can act as a metaphor for incomplete knowledge in a decision-making situation. The task is also known by its alternative name, the ‘majority problem’.
