The working week kicked off with people feeling more down than usual. AWS was down, Skype was down, so too were the Nikkei, Hang Seng and ASX200. The mighty Apple was down too, after its Chinese App Store was infected with dodgy code. Just six games into the season, Newcastle and Sunderland looked set to go down too.
The polemical predictions made by BBC's Panorama the previous week, that robots will steal all the jobs, seemed to have come true. Computers are so human now that they can't keep it together on a Monday.
By Wednesday it was Volkswagen's share price that was down, by around 30 per cent. Cars are increasingly loaded with software, and are therefore becoming attractive to hackers. The central computer on many of VW's diesel models, it has been widely reported, was programmed to detect test conditions and then maximise the performance of its fancy-pants low-pollution cleverness; whizzy stuff that compromises driving performance, so best not to have it operating when the car is actually being driven...
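Reduced to its essentials, the reported defeat logic is nothing more exotic than a conditional. Here is a minimal sketch in Python; the real firmware is proprietary, so every name, signal and threshold below is invented for illustration (a dynamometer test reportedly looks like driven wheels turning while the steering stays centred and the other wheels stay still):

```python
# Hypothetical sketch of a "defeat device" decision, for illustration only.
# The actual VW firmware is proprietary; these names and heuristics are invented.

def looks_like_emissions_test(steering_angle_deg, speed_kmh, rear_wheel_speeds_kmh):
    """On a test rig the driven wheels spin while the steering wheel stays
    centred and the non-driven wheels stand still -- a combination rarely
    seen on a real road."""
    steering_fixed = abs(steering_angle_deg) < 1.0
    rear_wheels_static = all(s < 1.0 for s in rear_wheel_speeds_kmh)
    return steering_fixed and rear_wheels_static and speed_kmh > 0

def choose_engine_map(steering_angle_deg, speed_kmh, rear_wheel_speeds_kmh):
    if looks_like_emissions_test(steering_angle_deg, speed_kmh, rear_wheel_speeds_kmh):
        return "low_nox_mode"      # full exhaust treatment, reduced performance
    return "performance_mode"      # livelier drive, far higher NOx emissions

# On the rig: front wheels at 50 km/h, steering centred, rear wheels still.
print(choose_engine_map(0.2, 50, [0.0, 0.0]))     # low_nox_mode
# On the road: steering input and all four wheels turning.
print(choose_engine_map(12.0, 50, [49.8, 50.1]))  # performance_mode
```

The point of the sketch is how little cleverness the fraud requires: a handful of sensor readings the car already has, and one `if` statement.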
So what we've learned is that the human-designed technology that tests emissions is terribly easy to evade. We also know that employees of reputable companies (it's probably safe to assume VW isn't alone in this) have been deliberately using technology to hoodwink those environmental protection tests, conning honest Joes hoping to make a responsible car purchase. When human-designed and human-controlled technology is so inept, and so misused, one looks forward to artificial intelligence taking over.
The incompetence and dishonesty revealed in an area as mundane as exhaust emissions set a frightening precedent for driverless cars. And yet I couldn't be more excited about the prospect of them. The utterly unproductive monotony of sitting in a car would be transformed. There would be far more quality family time on the way to Grandma's. Who cares about static traffic when homework is getting done, or everyone is settled down watching a movie?
However, if software engineers are spending hours on end gaming emissions tests, what sort of moral cul-de-sac will they find themselves in when programming how a driverless car avoids an accident? Imagine a tree falls across a road, right in front of a car that cannot stop in time. It can drive straight into the tree, flipping over and probably killing its passengers; swerve sharply left, clip the tree, mount the pavement and hit a mother pushing a pram; or veer right, clip the tree, and mow down an elderly gentleman about to cross the road. Do we trust that choice to people who code deliberately fraudulent software?
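The dilemma above can be boiled down to a crude cost function. This sketch is entirely illustrative: no real autonomous-driving stack enumerates victims like this, and the harm weights are invented. The point is simply that some coded rule must pick an option, and someone has to write, and can quietly tweak, that rule:

```python
# Illustrative only: the three options from the scenario above, scored by a
# crude, invented "harm" weighting. Real driverless-car planners do not
# work this way; the sketch just shows that a coded rule ultimately decides.

OPTIONS = {
    "straight_into_tree": {"passengers_at_risk": 4, "pedestrians_at_risk": 0},
    "swerve_left":        {"passengers_at_risk": 0, "pedestrians_at_risk": 2},  # mother and pram
    "swerve_right":       {"passengers_at_risk": 0, "pedestrians_at_risk": 1},  # elderly man
}

def least_harm(options, passenger_weight=1.0, pedestrian_weight=1.0):
    """Pick the option with the lowest weighted harm score.
    The weights encode a moral judgement: change them and the
    car 'chooses' a different victim."""
    def score(o):
        return (o["passengers_at_risk"] * passenger_weight
                + o["pedestrians_at_risk"] * pedestrian_weight)
    return min(options, key=lambda name: score(options[name]))

print(least_harm(OPTIONS))                         # swerve_right
print(least_harm(OPTIONS, pedestrian_weight=5.0))  # straight_into_tree
```

Two lines of configuration flip the outcome from protecting the passengers to sacrificing them, which is exactly why the character of the people setting those values matters.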
Thursday saw Facebook go down. Even though the world's second biggest website was down, somehow we managed to battle through the day without pictures of other people's dinner. Meanwhile, back in the real world, where things actually matter, the White House said more than 5.6 million fingerprint records were stolen from the Office of Personnel Management (OPM).
So technology has had a bad week. It hasn't worked properly, has failed to protect the environment and has allowed car buyers to be duped. The difficulty for technology is that it's still 'new' to mainstream culture. Technology-related privacy and security issues are high on people's agenda, far more so than privacy and security issues in the real world. The moral panic that always accompanies anything 'new' magnifies technology-related mistakes and snafus.
Mainstream media, and the general public, need to better understand technology and view it in the context of the physical world. For all the hysteria around online threats, imagine if smartphones, tablets and laptops were as dangerous as cigarettes or alcohol.