RBS IT Outage Turns Banking Nightmare Into a Reality

Last week saw one of 2015's highest profile IT outages play out in front of the masses, with RBS on the receiving end of a glitch which saw 600,000 online payments fail. Unsurprisingly, there was uproar, especially as there is precedent for this type of outage at RBS. Just seven months ago, the banking goliath was fined £56m for a 2012 IT meltdown which left customers locked out of their own accounts.

Although the issue now seems to have been resolved and a public apology has been made, it's unlikely customers impacted by two such outages in the space of three years will be willing to forgive and forget.

Damage limitation isn't enough

Managing the fallout following a public IT outage is of paramount importance, and having a similar issue so soon after the 2012 fiasco will only further tarnish RBS's reputation. However, it's now vital the bank learns from its mistakes to make sure customers are not forced to deal with similar issues further down the line. After all, fool us once, shame on you. Fool us twice, shame on us. Fool us a third time, we're moving banks.

We deserve to know we can access our bank accounts around the clock without the worry of being locked out or of transactions simply failing. The repercussions can be disastrous - from the embarrassment of not being able to pay for the weekly shop at the checkout, to missing scheduled payments for phone contracts, holidays or even mortgages. Simply saying sorry isn't enough; banks need to make sure IT failures stop plaguing online banking.

Saving the banks

To ensure consumers are not left stranded in the dark ages, banks use reputable data centres to minimise the risks associated with hosting vast amounts of the public's data. However, when issues such as the RBS software failure occur, data centres can't prevent the downtime itself; instead they work to ensure background tasks can still be completed while the glitches are resolved.

How does this work, I hear you ask. High quality data centres have failover systems in place, so that back-up infrastructure automatically kicks in when an outage occurs and the services relying on that data don't suffer any downtime. With the vast range of connections and structures which combine within data centres, providers are continuously asking "what will happen if this element fails?" to ensure consumers can continue shopping online or setting up direct debits without interruption.
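For the technically minded, here is a minimal sketch in Python of that automatic fallback idea: a client health-checks a primary service endpoint and quietly switches to a backup when the primary stops answering. The endpoint addresses and the /health route are illustrative assumptions, not details of RBS's systems or of any particular data centre.

    # Minimal failover sketch: prefer the primary endpoint, drop to the backup
    # when the primary's health check fails. All names below are hypothetical.
    import urllib.request

    PRIMARY = "https://primary.example-datacentre.net"  # assumed primary site
    BACKUP = "https://backup.example-datacentre.net"    # assumed standby site

    def is_healthy(endpoint: str, timeout: float = 2.0) -> bool:
        """Return True if the endpoint answers its (assumed) /health route in time."""
        try:
            with urllib.request.urlopen(endpoint + "/health", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            # Covers connection errors and timeouts alike.
            return False

    def active_endpoint() -> str:
        """Route traffic to the primary if it is up, otherwise to the backup."""
        return PRIMARY if is_healthy(PRIMARY) else BACKUP

    # Example: a background task asks for the active endpoint before each call.
    print("Sending payment batch via", active_endpoint())

Real data centres layer this principle across networks, power and storage rather than a single health check, but the design choice is the same: assume any element can fail, and have the switch-over happen without the customer noticing.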

It's this responsibility which makes data centres the unsung heroes of the technology world. So the next time you're managing your accounts online and making sure your multiple direct debit transactions are completed on time, take a moment to think of the data centres behind it all.