How do we know what we think we know? (part 2)

How do we know the tide won’t wash the beach away?

A couple of years ago a local newspaper reported a certain beach-front resident claiming: “It’s ridiculous to think this beach would ever get washed away by a king tide. I’ve lived here four months and it’s just never happened.” This is an example of an heuristic in operation. The particular heuristic the resident used was this: anything that hasn’t happened within the last four months will never happen. Clearly it’s a deficient way of thinking (parts of the beach have in fact been washed away), but might there be heuristics that, though not infallible, are useful?

This post follows on from one a while back about how we know what we think we know about ‘how things really are.’ I’m seeking to develop a way of characterising grid-group cultural theory as a set of four ecologically efficient social learning heuristics.

Given that we don’t actually know how stable the beach is, or indeed anything much about how things really are:

We use heuristics…

When we don’t know how the world really is, we use heuristics. This is what the beach-front resident was doing – and using the heuristic ‘beaches are pretty stable’ isn’t stupid; otherwise no one would go on beach holidays. Mostly it works. Sometimes it doesn’t. But in many cases it works better than the alternatives. ‘Beaches are unstable’ is a useful working assumption for most property developers, but generally unhelpful for sunbathers. Here’s an example of a relatively efficient heuristic: lost in a big city, we could in principle use an exhaustive algorithm that would eventually find our way back to the hotel, such as ‘always turn left unless you’ve been that way before’. But this would be entirely impractical and take far too long. Instead we use a few heuristic rules, such as ‘walk down the widest-looking street until you come to a landmark you recognise’. We take short cuts like this all the time. It’s not perfect, but on the whole it’s quite efficient (that’s an heuristic about heuristics, by the way).
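The hotel example can be made concrete with a toy sketch. All names and the grid layout below are invented, and a breadth-first sweep stands in for the ‘always turn left’ rule: the exhaustive search is guaranteed to find the hotel but checks street corners in a fixed order, while the greedy ‘head for the landmark’ heuristic (which assumes the hotel’s district is roughly visible) gets there in far fewer moves.

```python
# Toy sketch (all names and the grid layout are invented for illustration):
# finding a "hotel" on a city grid, exhaustively versus heuristically.
from collections import deque

def exhaustive_search(start, hotel, size):
    """Systematic breadth-first sweep of street corners: guaranteed to find
    the hotel eventually, but ignores every clue about where it might be."""
    seen, queue, visited = {start}, deque([start]), 0
    while queue:
        corner = queue.popleft()
        visited += 1
        if corner == hotel:
            return visited  # corners checked before success
        x, y = corner
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

def heuristic_search(start, hotel, size):
    """Heuristic: always walk towards the landmark (assumes the hotel's
    district is roughly visible). Not guaranteed in general, but cheap."""
    (x, y), steps = start, 0
    while (x, y) != hotel:
        if x != hotel[0]:
            x += 1 if hotel[0] > x else -1
        else:
            y += 1 if hotel[1] > y else -1
        steps += 1
    return steps

print(exhaustive_search((0, 0), (7, 9), 10))  # checks most of the city
print(heuristic_search((0, 0), (7, 9), 10))   # walks almost straight there
```

On a ten-by-ten grid the heuristic walker reaches the hotel in sixteen moves, while the exhaustive sweep checks most of the city first – efficient in this case, though the heuristic offers no guarantee if the landmark assumption fails.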

I am hypothesising that the grid-group cultural theory typology offers a matrix producing four alternative heuristic organisational frameworks, depending on our view of group membership (Group) and of public classification systems (Grid).

Heuristics always produce partial knowledge

We don’t know what the world is ‘really like’, so any totalising characterisation will be at best partial. Even if the world were as simple as the mechanistic setting of the cellular-automaton task known as the density classification problem, for instance, we would fail to represent it correctly in (at the very least) some 12% of cases. Even in simple mechanistic systems such as this, total knowledge appears to be impossible. The density classification problem takes place in an environment with just two possible states (on or off, one or zero), but grid-group cultural theory suggests that in the real world, not only is there noise and complex topology – it also makes sense to think of four states rather than two (think of paired light bulbs: on-on, on-off, off-on and off-off).
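To make the density classification problem concrete, here is a minimal sketch using the Gacs-Kurdyumov-Levin (GKL) rule, a classic rule for this task from the cellular-automata literature (the same GKL quoted later in this post); the lattice size and step count are arbitrary choices made for speed. Each cell sees only a few neighbours, yet the ring usually, though not always, settles on the true majority state – the kind of partial success the paragraph above describes.

```python
# A minimal sketch of the density classification task on a ring of cells,
# using the Gacs-Kurdyumov-Levin (GKL) rule from the cellular-automata
# literature. Parameters (lattice size, step count) are chosen for speed.
import random

def gkl_step(cells):
    """One synchronous GKL update: a 0-cell adopts the majority of itself
    and its 1st and 3rd left neighbours; a 1-cell uses its right neighbours."""
    n = len(cells)
    out = []
    for i, s in enumerate(cells):
        if s == 0:
            votes = s + cells[i - 1] + cells[i - 3]
        else:
            votes = s + cells[(i + 1) % n] + cells[(i + 3) % n]
        out.append(1 if votes >= 2 else 0)
    return out

def classifies_correctly(n=59, steps=120):
    """Random start; did every cell settle on the true majority state?"""
    cells = [random.randint(0, 1) for _ in range(n)]
    truth = 1 if sum(cells) * 2 > n else 0
    for _ in range(steps):
        cells = gkl_step(cells)
    return len(set(cells)) == 1 and cells[0] == truth

random.seed(0)
trials = 200
rate = sum(classifies_correctly() for _ in range(trials)) / trials
print(f"GKL classified {rate:.0%} of random configurations correctly")
```

The exact success rate depends on lattice size and how close the initial density is to one half; the point is only that even a well-designed local rule leaves a residue of misclassified cases.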

This complicates the analogy with the density classification problem, but it may be hypothesised that some of the salient features of the binary version would still apply in a system with four possible states. In particular, there would still be a percentage chance of knowledge failure. The other three cultural biases step into the breach when one cultural bias alone fails to account for observable events.
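As a purely hypothetical illustration (nothing here comes from the cited research), the paired-light-bulb image can be encoded as two independent bits per unit, with a majority vote taken bit by bit. This is one way a binary rule might be lifted to four states without changing its basic machinery:

```python
# Hypothetical sketch: the four states of the paired-light-bulb image
# (off-off, off-on, on-off, on-on) encoded as two bits per unit, with a
# binary majority rule applied to each bit independently.
def majority_bit(values):
    """Majority vote over a list of 0/1 values."""
    return 1 if sum(values) * 2 > len(values) else 0

def four_state_majority(neighbours):
    """Each neighbour is a (grid_bit, group_bit) pair; vote bit by bit."""
    grid = majority_bit([g for g, _ in neighbours])
    group = majority_bit([m for _, m in neighbours])
    return (grid, group)

print(four_state_majority([(1, 0), (1, 1), (0, 0)]))  # -> (1, 0)
```

Whether a per-bit vote is the right way to aggregate four cultural states is exactly the kind of open question the hypothesis raises; the sketch only shows that the extension is mechanically straightforward.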

The efficiency of heuristics may (in some cases) be measured

It seems an heuristic approach to knowledge gathering may be more efficient than the alternatives, even though it is still far from perfect. In particular, it appears that simple heuristics are efficient in complex contexts. More on this research can be found in ‘Efficient system-wide coordination in noisy environments’ by Andre A. Moreira, Abhishek Mathur, Daniel Diermeier, and Luis A. N. Amaral:

“In the density-classification problem, each unit has to evolve toward the correct final state with only local information about the current configuration of the whole system. As discussed above, the sophisticated strategies devised to solve this problem rely on an organized structure of interactions, in which the units need to have a precise notion of the state of their neighbors, as well as the position of the neighbors in the network. Although these strategies perform well in idealized environments, in real-world decentralized systems the environment is rather more complex. It thus is reasonable to hypothesize that in real-world systems the units make their decisions by using simple heuristics that are robust against errors and do not depend on the precise structure of interactions. A plausible heuristic to reaching a consensus is to adopt the majority state of one’s neighbors.” p.12087f.

“Our results demonstrate that noise and topology critically affect the performance of strategies for achieving global coordination and information aggregation. Rules such as GKL, which perform well in noiseless, regular environments, are inefficient if complex topologies and noisy information transmission are present. In contrast, simple heuristics such as the majority rule investigated here are efficient in exactly such environments. Because real social and natural networks are characterized by noisy transmission and complex, small-world topologies, our findings provide an explanation for the observed use of such social-learning heuristics by animals and humans. Furthermore, our results suggest that decision rules cannot be evaluated in isolation from the environment of their system. That is, their success in coordinating behavior and aggregating information depends on both the rule and the typical environment in which it is used. In this sense, the majority rule is “ecologically efficient”; it is well suited to interaction systems that resemble the real world. Put together, our findings hint at the possibility that networks and ecologically efficient rules coevolve over time.” p.12090
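The quoted findings can be illustrated with a rough simulation, assuming a Watts-Strogatz-style rewired ring stands in for the ‘small-world’ topology and a per-unit error probability for the ‘noisy transmission’; all parameter values below are invented for the sketch. With a clear initial majority, the simple majority heuristic recovers the true majority state in nearly every noisy run:

```python
# A rough simulation of the quoted result, assuming a Watts-Strogatz-style
# rewired ring for the "small-world" topology and a per-unit error
# probability for "noisy transmission". All parameter values are invented.
import random

def small_world(n=100, k=4, p=0.1, rng=random):
    """Ring lattice with k nearest neighbours per node, each edge rewired
    to a random endpoint with probability p."""
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if rng.random() < p:
                j = rng.randrange(n)
                while j == i:
                    j = rng.randrange(n)
            edges.add((min(i, j), max(i, j)))
    nbrs = [[] for _ in range(n)]
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    return nbrs

def majority_dynamics(nbrs, states, noise=0.05, steps=30, rng=random):
    """Each unit adopts the majority state of its neighbours; with
    probability `noise` it picks a state at random instead."""
    for _ in range(steps):
        new = []
        for i, nb in enumerate(nbrs):
            if rng.random() < noise:
                new.append(rng.randint(0, 1))
                continue
            ones = sum(states[j] for j in nb)
            if ones * 2 > len(nb):
                new.append(1)
            elif ones * 2 < len(nb):
                new.append(0)
            else:
                new.append(states[i])  # tie: keep the current state
        states = new
    return states

rng = random.Random(42)
wins = 0
for _ in range(20):
    nbrs = small_world(rng=rng)
    states = [1 if rng.random() < 0.7 else 0 for _ in range(100)]  # 1 is the true majority
    wins += sum(majority_dynamics(nbrs, states, rng=rng)) * 2 > 100
print(f"true majority recovered in {wins} of 20 noisy runs")
```

The majority rule here uses no information about network structure at all, which is precisely why the paper calls it robust: noise and rewired links barely dent its performance.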

Now read the first part of this post: How do we know what we think we know? (Part 1)


Further reading

Lansing, ‘Complex adaptive systems’ (lansing-complex-adaptive-systems.pdf)

Majken Schultz, On Studying Organizational Cultures: Diagnosis and Understanding. Walter de Gruyter, 1995.

