By Bobby Duffy, King's College London
We think one in four of the entire population is Muslim (5% in reality). We believe 31% are immigrants (the official figure is 13%). We have an extraordinary view of teenage girls, believing that on average 15% under the age of 16 get pregnant each year (0.6% in reality). We think £24 out of every £100 of benefit spend is claimed fraudulently (it's actually 70p). We think crime is rising (it's been falling for years). And we're more likely to pick out foreign aid as a top item of expenditure than state pensions (we spend 9-10 times more on pensions).
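Those gaps can be expressed as overestimation factors; a minimal sketch using only the figures quoted above (the labels and units are my own shorthand):

```python
# Perceived vs actual figures quoted above.
# Percentages, except benefit fraud, which is £ per £100 of spend.
figures = {
    "Muslim share of population (%)": (25, 5),
    "Immigrant share of population (%)": (31, 13),
    "Under-16 pregnancies per year (%)": (15, 0.6),
    "Benefit fraud (£ per £100 of spend)": (24, 0.70),
}

for label, (perceived, actual) in figures.items():
    factor = perceived / actual
    print(f"{label}: perceived {perceived}, actual {actual} "
          f"-> {factor:.1f}x overestimate")
```

On these numbers the public overestimates by anything from roughly 2x (immigration) to over 30x (benefit fraud).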
The scale of our errors is startling - but this isn't particularly new; similar patterns have been seen in other surveys. So the more interesting questions are why these massive misperceptions arise and what, if anything, we can do about them.
Four reasons to be wrong
On why, I'd group the explanations into four. First, there are simple measurement and definitional problems. It's difficult to get across what can be quite complex and precise issues in simple survey questions.
But probably more importantly, the public are not always thinking about the things we think they are. For example, when we ask people what they were thinking of as benefit fraud when they guessed at its scale, they select items that can't be counted as actual fraud. In people's minds, it includes claimants not having paid tax in the past and people having children so they can claim more benefits.
Second, there are a whole range of cognitive errors, simple mistakes we make when answering these types of questions. This includes problems of statistical literacy - for example, we just struggle with very big or very small numbers, and find it hard to distinguish between rates and levels.
But there are also explanations from social psychology on the biases and shortcuts in how we think: for example, we know we're more likely to focus on and remember negative information.
Third, there is certainly an impact from the media and political discourse. The links are complex and difficult to prove categorically, but the association between attitudes and media coverage is often strong. Of course, the media also reflects our concerns and tastes for types of information: to a large extent we get the media we want. The focus on vivid stories rather than straight facts is because we pay more attention to those vivid stories ourselves (we admit we rely on personal experience and information from those around us more than representative data).
Which leads onto the fourth key explanation - that these misperceptions may be an effect of our concerns rather than a cause. That is, we overestimate partly because we are worried about these things, rather than being worried because we believe we know their full extent. Academics call this "emotional innumeracy": we're making a point about what's worrying us, whether we know it or not.
Getting it right
What we decide to do depends on which of these effects we think are mostly to blame. The boring, but probably accurate, answer is that it's likely to be a bit of each, so we need a range of responses. In particular, we need to avoid a convenient conclusion that because over-estimates are partially a reflection of our concerns, we shouldn't even try to correct them. That just leads to a vicious circular argument that perceptions are reality even when they are plainly wrong.
So we do need to improve statistical literacy - which is as much about improving our confidence to question both statistical assertion and anecdotes as improving simple maths skills. This needs to start in schools, with more use of real-life data rather than abstract problems. Given the disproportionate effect of the media and politicians, steps to improve their understanding of statistical stories should be a focus too. The Royal Statistical Society's getstats campaign targets each of these.
Alongside this, we need to continue to challenge the misuse of data by politicians and the media, through bodies like the UK Statistics Authority and Full Fact. Of course, this will have a limited direct impact on public perceptions, given it is working against the weight and habits of the media and political rhetoric. But the aims of these bodies are at least as much preventative as corrective: the more those using statistics badly are pulled up, the less likely they are to think the risk is worthwhile.
Even so, these steps will always struggle to get to a key part of the problem. There are many instances where the information provided by politicians or the media may be perfectly accurate, but describes events that are vanishingly rare - and the vivid anecdote is the only thing people remember. This is part of the reason why "myth-busting" exercises alone are likely to have very limited impact on perceptions.
So just as important as providing a correct picture of scale is providing a narrative that appeals to people, with its own role models and vivid stories.
The power of facts
We also shouldn't entirely give up on changing people's minds with facts. We regularly run deliberative workshops on tricky policy issues where information is provided, experts give evidence and people have time to reflect on things they don't normally get the chance to. And views do often shift. This serves a useful purpose in its own right, if it means policies are based on what a more informed public thinks.
But it's obviously not very practical to get the whole population in a workshop for a day or so. However, new communication technology does provide easier ways to do this. It won't be as cathartic and will only reach a subset, but online mass deliberations by independent organisations (say, the BBC) could play their part in improving our currently badly informed debates.
Just don't expect a public epiphany any time soon - those many thousands of phantom pregnant teenagers will be with us for some time yet.