How do we know the tide won’t wash the beach away?
A couple of years ago a local newspaper reported a certain beach-front resident claiming “It’s ridiculous to think this beach would ever get washed away by a king tide. I’ve lived here four months and it’s just never happened.” This is an example of a heuristic in operation. The particular heuristic the resident used was this: anything that hasn’t happened within the last four months will never happen. Clearly, it’s a deficient way of thinking (parts of the beach have in fact been washed away), but might there be heuristics that, though not infallible, are useful?
This post follows on from one a while back about how we know what we think we know about ‘how things really are.’ I’m seeking to develop a way of characterising grid-group cultural theory as a set of four ecologically efficient social learning heuristics.
Given that we don’t actually know how stable the beach is, or indeed anything much about how things really are:
We use heuristics… Continue reading “How do we know what we think we know? (part 2)”
In reply to Matthew Taylor’s question over at his RSA blog:
“how can it be true both that there are some social environments which encourage particular attitudes and behaviours (which could be said broadly to fit an egalitarian outlook) while, at the same time, in relation to any specific problem or decision, a set of conflicting responses (of which egalitarianism is only one) will emerge?”
1) Scale is crucial. Just as there isn’t a single rationality but four, neither is there a single scale. At one scale of operation, one of the four cultures may be dominant, and may seem to be a good fit with the landscape, but at other scales other cultural biases may be a better fit. See the work of ecologist Buzz Holling on this.
2) Similarly, time is also crucial. The social-ecological model of Holling and others in the Resilience Alliance suggests that ecological succession has a social counterpart. What appears optimal at one moment becomes less so as time changes the environment: new problems arise, calling forth new solutions and new institutions.
3) The ability to defect is also crucial. I have been quite taken with a cellular automaton problem called the density classification problem. In short, it seems to suggest that even in simple mechanistic systems, total knowledge is impossible. This means there is always room for the dominant answers to be wrong and for defectors from the main view to get closer to the truth. Given that a) social-ecological systems are far more complex than cellular automata and b) evolution has fine-tuned human responses to problem solving, it seems possible that human society is an environment which rewards a dominant viewpoint without punishing a minority of dissidents too severely.
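The density classification task asks a one-dimensional, two-state cellular automaton to decide, using only local rules, whether its initial configuration holds a majority of 1s (ideally converging to all 1s) or a majority of 0s (all 0s). A minimal Python sketch can show why local knowledge falls short. I use the obvious local-majority rule here purely for illustration; it is not one of the cleverer rules studied in the literature:

```python
def step(cells):
    # Local-majority rule on a ring: each cell takes the majority
    # value of itself and its two immediate neighbours.
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def run(cells, max_steps=100):
    # Iterate until a fixed point (or the step budget) is reached.
    for _ in range(max_steps):
        nxt = step(cells)
        if nxt == cells:
            break
        cells = nxt
    return cells

# 6 ones out of 10 cells: a perfect classifier should end with all 1s.
start = [1, 1, 1, 0, 0, 1, 1, 0, 0, 1]
final = run(start)
print(final)             # the automaton freezes into mixed blocks...
print(sum(final) == 10)  # ...so it never reports "majority 1": False
```

Every block of two or more identical cells is frozen under this rule, so the automaton gets stuck in a mixture and never answers the global question; and it is a known result that no two-state rule of this local kind classifies density perfectly on all inputs.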
How can we know what the world is really like?
We often hear fairly frank opinions about how things ‘really’ are. We probably make these kinds of claims ourselves from time to time: ‘the fact is…’; ‘that’s just the way it is…’; ‘you know what it’s like…’
But how do we know what we think we know? And what makes us so sure that our assumptions are right?
Continue reading “How do we know what we think we know? What the Density Classification Problem tells us”