Living Dangerously in the Age of Data

Last month, I wrote about why it was dangerous that big companies were being given the opportunity to concentrate a huge amount of data on individuals. In the intervening weeks, the notorious adultery website Ashley Madison was hacked and customer data were published on the 'dark web'. A few factors set this breach apart from other hacks, and there are plenty of lessons both businesses and individuals can learn about data protection.

The nature of the Ashley Madison incident is worth pausing over. On the face of it, the perpetrators may have been motivated by their own high morals. Even if we ignore the inherent contradiction in seeking to expose what you perceive to be the moral failings of others by committing a serious crime, it's unlikely that ethics was the sole motivator of this hack. The hacking community was sceptical, with many arguing that selling the data to fraudsters was probably the real rationale. If reports of users being blackmailed are true, that would remove any pretence of moral authority. Nevertheless, the incident draws attention to a relatively new kind of hacker - the 'white hat' or 'ethical' hacker - someone who breaches systems to highlight a flaw, uncover malpractice or expose lax security standards. The growing ranks of the white hats present a new type of danger to businesses and individuals.

Whereas black hat (criminal) hackers are fairly predictable in their attacks - it doesn't take a rocket scientist (or a computer scientist, for that matter) to deduce that they will be after financial information - white hat attacks and their consequences are harder to anticipate. In the case of Ashley Madison, a business that trades on anonymity, customers' email addresses were the most dangerous data that could have been stolen. Publishing them online for all to see was a previously unthinkable consequence.

If we now live in a world where any business is a target and any type of data could be of interest, how should we protect ourselves? It's not practical to become a digital hermit and simply avoid putting information online. Avoiding certain businesses because of the risk of 'exposure' is also a questionable tactic: it amounts to victim blaming and lets the hackers dictate how you behave.

The first area to ponder is who should bear ultimate responsibility for protecting personal data. If you pass your information over to a company, are you accepting the risk that it could be stolen, or is any breach down to the negligence of the business involved and thus entirely its fault? No security system is entirely foolproof, so it is fair to say that if you hand personal information to a third party, you should be aware of the risk. However, not every company adheres to the highest possible security standards, which can make your data easy prey.

If you accept that the internet is a dangerous place for your data, it makes sense to mitigate this risk by taking more responsibility yourself. First, if you consider your personal data to be a commodity, it stands to reason that you should spread the risk. Trusting one company with a huge tranche of your personal information seems pretty reckless: if it gets hacked, the attackers get everything, leaving you with a big headache.

Second, only trusting reputable companies with your data sounds reasonable, but it is a little harder in practice. After all, what constitutes a 'reputable' company? Even the most established businesses with spotless corporate reputations can be breached and, indeed, might make more attractive targets because of their reputation. There's no easy answer, but if a company has a long track record of playing fast and loose with security, it is probably best avoided. The same goes for sites unlikely to have the resources to put adequate security in place.

Third, limiting the data you provide to only what is required is a sound approach. Think of your data as money: by providing it to a company, you should get something in return. If the business just wants your data for its own sake and provides you with no benefit, that's a bad deal. Always read the terms and conditions carefully to see what information you are consenting to provide and what the company can do with it.

At the moment, any company can get hacked at any time, so debating whether moral or legal responsibility to protect data ultimately lies with a business is somewhat academic. Once your data has been stolen, there's little recourse - it's going to be very inconvenient for you. Until better protections are in place, the only course of action for individuals is to take as much responsibility for their own data as possible and be wary of the risks.

Mike Weston is CEO of data science consultancy Profusion.