Change is afoot at the Bank of England. At the end of last year it became one of the first central banks in the world to announce plans to scour the web for data to better understand the state of the economy, and improve decision-making.
It has set up a special taskforce that will use advanced analytics to explore 'unconventional data', which includes online information. Interestingly, the Bank revealed that this new approach has already had a tangible impact: its decision last year to impose new constraints on the housing market was prompted in part by big data on mortgage borrowing across the UK.
For the past few years the financial world has been slowly waking up to the power of online data. Many traders and investors already harvest online information from Twitter, search engines, forums and the like, in the quest to spot market-moving trends before rivals and transform that insight into profit. But the Bank of England's decision to get on board shows how quickly this innovation is entering the mainstream.
And it should, because it makes sense. Traditional economic analysis is based largely on the efforts of statistical organisations like the ONS, which follow pre-internet methods of information gathering. These scientific, comprehensive analyses will continue to form the backbone of economic understanding, but they suffer from a substantial time lag. In an ever faster-moving world, where much of our commerce and communication now takes place online, ignoring the vast new realm of data the internet affords us is no longer an option.
So what sort of data are we talking about? The variety is immense, but falls into three broad categories:
• Mass behaviour: the web acts as a record of behaviour - what we search for, what we purchase, and so on. Aggregating the behaviour of millions can give an insight into trends. For instance, thousands of people suddenly complaining about getting laid off on Twitter might say something about employment well before the official statistics catch up (a simple sketch of this kind of aggregation follows this list)
• Expert insight and influencers: whether it's a renowned economist sharing predictions for the latest bout of QE or a prominent tech blogger praising the latest device, the internet is awash with the great and good making their thoughts known
• Hard data: certain sites will, by their nature, collect hard data that can provide insight into relevant issues. The number and type of listings on Zoopla and Monster, for example, could tell us something about the housing and employment markets respectively
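To make the 'mass behaviour' category concrete, here is a minimal Python sketch of the kind of aggregation involved. Everything in it is assumed for illustration: the `posts` sample stands in for a real Twitter or forum feed, and the keyword list is far cruder than anything a production system would use.

```python
# A purely illustrative sketch: count layoff-related posts per day as a
# crude leading indicator. The `posts` sample stands in for a live feed.
from collections import Counter
from datetime import date

posts = [
    (date(2015, 3, 2), "just been laid off, great start to the week"),
    (date(2015, 3, 2), "new phone arrived today!"),
    (date(2015, 3, 3), "whole team made redundant this morning"),
    (date(2015, 3, 3), "laid off after six years. gutted."),
]

# Hypothetical keyword list - real matching would need to be far richer
LAYOFF_TERMS = ("laid off", "redundant", "lost my job")

daily_mentions = Counter(
    day
    for day, text in posts
    if any(term in text.lower() for term in LAYOFF_TERMS)
)

for day, count in sorted(daily_mentions.items()):
    print(day, count)  # a sustained rise may hint at labour-market stress
```

The point is not the code itself but the shape of the approach: individual posts are near-worthless, yet a daily count across millions of them can move before official figures do.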
But the web is a big and confusing place. Every minute of every day sees 100,000 new tweets, 2 million Google searches, and 347 new WordPress blog posts, while consumers spend around $272,070 on online purchases. Separating useful insights from the noise is a challenge quite unlike any involved in traditional data analysis, and there are no easy rules. It requires both a different mindset and the right tools. Here are some of the key considerations:
• Who is producing the data? There is no quality control on the web. Information comes from all sorts, with varying motivations, styles, and levels of expertise. Parsing this mix of data requires a level of lateral thinking utterly foreign to most analysts
• What is the nature of the data? Website traffic is a hard fact, whereas a Twitter post could be an unhinged opinion or joke. Connecting the dots between the different sorts of information in a useful way is a tall order
• What works for one channel mightn't work for another. For instance, when looking at employment trends, a relevant Google search might be something like 'How do I apply for job seeker's allowance?', whereas on Twitter it would more likely be something like 'just been sacked @JobSeekers #disgrace'
• Data analysts will be familiar with the problem of false positives, but analysing online data takes this to a whole new level. To use a simple example, #NewHouse trending could tell you something about the housing market, but it could also mean the popular TV series has returned to screens (a naive filter for this kind of ambiguity is sketched after this list)
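To illustrate how context-aware filtering might tackle such false positives, here is a deliberately naive sketch. The hashtag and both keyword lists are hypothetical assumptions, not a real methodology:

```python
# An illustrative sketch of naive false-positive filtering: keep posts that
# mention a housing-related tag, but drop those whose context suggests the
# TV series rather than the property market. Keyword lists are hypothetical.
HOUSING_CONTEXT = ("mortgage", "estate agent", "moving in", "deposit")
TV_CONTEXT = ("episode", "series", "season", "finale", "tonight's")

def looks_like_housing_signal(text: str) -> bool:
    text = text.lower()
    if "#newhouse" not in text:
        return False
    if any(word in text for word in TV_CONTEXT):
        return False  # probably chatter about the show, not the market
    return any(word in text for word in HOUSING_CONTEXT)

print(looks_like_housing_signal("#NewHouse at last - mortgage approved!"))  # True
print(looks_like_housing_signal("#NewHouse episode tonight, can't wait"))   # False
```

A real system would replace these hand-written lists with statistical language models, but the underlying judgment call - which context words count as evidence - remains a human one.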
Cutting through the noise to spot a genuine market signal is far from easy. It requires a painstaking 'cleaning' process. Sophisticated technology can sift the sheer volume of data in a sensible way, but even then good judgment and expertise are needed to make sense of the result. One common first step, sketched below, is to ask whether a jump in activity genuinely stands out against its recent baseline.
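Here is a minimal sketch of that idea, assuming a daily series of mention counts. The numbers and the three-standard-deviation threshold are purely illustrative:

```python
# A sketch of simple spike detection: flag a day whose mention count sits
# well above its recent baseline. The series and threshold are hypothetical.
from statistics import mean, stdev

daily_counts = [41, 38, 44, 40, 39, 42, 37, 43, 118]  # hypothetical mentions/day

window = daily_counts[:-1]  # recent history as the baseline
baseline, spread = mean(window), stdev(window)
latest = daily_counts[-1]

if spread and (latest - baseline) / spread > 3:
    print(f"Possible signal: {latest} vs baseline {baseline:.1f}")
else:
    print("Within normal variation - likely just noise")
```

Even a flag like this is only the start of the analysis: it tells you something unusual happened, not whether it matters.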
It is worth rising to the challenge, as getting it right affords huge benefits. Data that is only available quarterly, half-yearly, or even annually can now be supplemented with a much timelier picture - crucial for a world as fast-paced as ours. This gives the Bank of England longer lead times to assess market conditions, and more time to evaluate the appropriate response. The data could allow it to better spot early signals of potential bubbles or crises, allowing for subtler, slower interventions - rather than drastic firefighting in response to a sudden shock. As well as a quicker picture, online data can also present a more complete and - if analysed properly - more accurate one.
As more and more of our everyday activities move online in some form or another, those organisations that fail to tap into the web will be left with an ever-growing black hole in their ability to anticipate and react to markets. That central banks are now getting in on the action goes to show that this soon won't be optional. Sooner or later all financial firms will need to take the plunge: the question is how prepared they'll be.