According to the folk tale, a frog will remain in a pot of water brought to a boil, as long as the water is heated slowly. While probably false, the tale provides an apt metaphor for adapting to phenomena that change incrementally over time. If we see the changes related to Artificial Intelligence as such a phenomenon, then we humans are not likely to leap from the pot like the frog. In fact, we're more likely to call out, "Turn up the heat!"
Let's take a quick step back to frame things. There are three stages of A.I.:
1. Narrow A.I.: the ability of machines (computers) to execute specific tasks better than humans. Examples: tic-tac-toe, chess. If there were a map of "A.I. land," there would be a marker on "Stage 1, Narrow A.I." with the words: "You are here."
2. General A.I.: The ability of a machine to perform pretty much any cognitive task as well as a human can.
3. Superhuman A.I.: the ability to outperform the best humans at any task in practically any field.
When people hear the words "Artificial Intelligence," most leap straight to a vision of the superhuman, the third stage of A.I. A few will imagine utopian scenarios like the (artificially) emotionally intelligent Her or a benevolent WALL-E. Many more will picture a dystopian world like The Terminator or The Matrix. And the question most people are concerned with is "how soon will the pot boil?" At what point do our robot overlords enslave us, or perhaps worse, make us irrelevant?
While I'm not qualified to render an accurate opinion on the timeframe, I am intrigued by a slightly different question: If the rise of A.I. has such profound implications, and most people imagine nightmare scenarios, why do so few realize we are rushing towards it headlong, embracing and adopting every advancement immediately and clamouring for more?
The answer, it seems, is that A.I. is "invisible". Once a new piece of A.I. is introduced into our lives, we may have a brief "hmmm..." reaction or maybe even an "ah....cool!" moment or two, and then it quickly recedes into the background of our lives. We adopt it and adapt to it. It becomes part of the furniture and we're back to business as usual.
Take anti-lock brakes. The first time they kick in, the experience is a bit strange--a moment of surprise at the change in your car's behaviour on a slippery surface. But after a couple more times you don't even notice--you relax, feeling more secure in the knowledge that you have more control and a better stopping distance than you had previously. Antilock Braking Systems are governed by an algorithm and generally perform significantly better than humans in the same situation (who will typically mash harder on their non-antilock brake pedal, somehow hoping the force of their feet will confer additional friction to their tyres skidding across the pavement). In other words, ABS fits the textbook definition of narrow (Stage 1) A.I. above.
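To make the "governed by an algorithm" point concrete, the core idea behind ABS can be sketched as a simple control loop: estimate how much the wheel is slipping relative to the car, release brake pressure when the wheel nears lock-up, and reapply it when grip returns. This is a toy illustration only--the slip threshold and pressure steps below are invented, and real controllers are far more sophisticated:

```python
def slip_ratio(vehicle_speed, wheel_speed):
    """Fraction by which the wheel lags the vehicle (0 = rolling freely, 1 = locked)."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def abs_step(vehicle_speed, wheel_speed, brake_pressure,
             target_slip=0.2, step=5.0):
    """One control step: back off pressure when slip is too high,
    reapply it when slip is at or below the target. Units and
    constants are illustrative, not from any real ABS controller."""
    slip = slip_ratio(vehicle_speed, wheel_speed)
    if slip > target_slip:
        return max(0.0, brake_pressure - step)   # wheel near lock-up: release
    return brake_pressure + step                 # grip available: reapply
```

Run in a tight loop, this keeps the wheel oscillating around the slip ratio where friction is highest--which is exactly why the pedal pulses under your foot, and why the algorithm out-brakes a panicked human.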
Do you remember spam? Anyone who used email ten years ago certainly does. Sure, we all still get the occasional unwanted email, but I can remember wading through literally hundreds of emails every single day to get to the handful of relevant ones. But perhaps a better question is: do you remember when it stopped being such an annoyance? I didn't. I had to search. The first articles about spam disappearing started popping up five and a half years ago. It turns out that this was a relatively easy "narrow A.I." problem to solve. By comparison, it took until 2016 for an A.I. to beat the best human at the game of Go. But the point is that most people didn't really think much about it. We barely noticed as the quality of our inboxes got better and better.
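For a flavour of why spam was a tractable narrow-A.I. problem, here is a toy naive Bayes classifier--one common early approach to spam filtering, sketched with made-up training data and add-one smoothing. Real filters use far richer features, but the principle is the same: words like "free" and "offer" shift the probability toward spam.

```python
from collections import Counter
import math

def train(messages):
    """messages: list of (text, is_spam) pairs. Returns word counts
    per class and the number of messages per class."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, spam in messages:
        for word in text.lower().split():
            counts[spam][word] += 1
        totals[spam] += 1
    return counts, totals

def is_spam(text, counts, totals):
    """Classify by comparing smoothed log-probabilities per class."""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        # log prior + sum of add-one-smoothed log likelihoods
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return scores[True] > scores[False]

# Invented training data for illustration
data = [("win free money now", True),
        ("cheap pills free offer", True),
        ("meeting agenda for monday", False),
        ("lunch with the team", False)]
counts, totals = train(data)
```

Trained on millions of real messages instead of four toy ones, this kind of statistical filter is roughly what quietly emptied our spam folders while we weren't watching.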
We notice things when they annoy us, but when they stop bothering us, they cease to register. And this is the nature of the current state of the art of A.I.: our awareness is directly proportional to the amount of friction still present in the experience. I remember when Google Maps wasn't all that good--and in some parts of the world it still struggles--presenting a wide blue circle for your position when you're trying to figure out exactly where you are and which way you're facing. That can be frustrating, but it happens less and less. Maps' directions have gotten better and better, taking into account your most frequent mode of transportation and current and historical traffic conditions. We already take the navigation systems based on these APIs for granted, but their implications reach deep. They enabled services like Uber, Lyft, and Taxify, and while the impact of these services may not be 100% positive, they have caused a sea change in personal transportation. And now the ability to seamlessly order point-to-point personal transportation in nearly every major city in the world (you don't even need to speak the language!) doesn't even seem remarkable.
Predictive A.I. is already pervasive, and not just when you talk to your devices and they talk back. When Spotify recommends a track that you like, it's an A.I. When you check tomorrow's weather, it's an A.I. Every time you search--especially with terms that are not quite right, or maybe have a misspelling or two--and Google still delivers what you're looking for, it's an A.I. When Facebook, Twitter, and Instagram deliver content to you, it's an A.I. At this point, roughly half of all shares traded on the principal financial markets are traded algorithmically. When your plane lands, it's not a human that decides which gate it should go to. Just as it's not a human that found you the best price for your ticket. Nor a human who decides which passenger will be denied a seat on an oversold plane--or, as we saw recently, pulled off of that plane! And this is a perfect example of the water starting to boil: we cease to notice A.I.; we adapt to it, reforming our habits and actions around it. And every now and then, when it delivers the "best solution" to a problem as it was designed to do, we're so used to blindly heeding its dictums that we may not stop and think for ourselves--and deliver instead the human solution.
I'm actually a huge proponent of A.I. I've seen first-hand how machine learning can define A.I. algorithms that dramatically improve the speed of human learning. But I believe it is critical that we be conscious of what we give over to algorithmic control. A.I. will increasingly determine the nature of the reality in which we live and in which we raise our future generations. A McKinsey report published last month interviewed more than 3,000 senior executives on the use of A.I. technologies, their companies' prospects for further deployment, and A.I.'s impact on markets, governments, and individuals. The report found that tech giants including Baidu and Google spent between $20B and $30B on A.I. in 2016, with 90% of this going to R&D and deployment and 10% to A.I. acquisitions. The report also estimates that total annual external investment in A.I. was between $8B and $12B in 2016, with machine learning attracting nearly 60% of that investment. Clearly the market is showing significant growth. However, if we are unable to distinguish what comes from A.I. and what does not, we have no capacity to understand the trade-offs and choose for ourselves. Meanwhile we can sit in our perfectly warmed homes (thanks to Nest) and relax in our baths, heated at just the right time to just the right temperature by our smart water heaters, and wonder just what that frog was so fussed about.
About Scott Dodson, CGO, Lingvist
Scott is a mentor for top incubators including Microsoft Ventures & Emerge Education (London) and Techstars (Seattle, London, Berlin, NYC). He has held C-level positions for over two decades in Games, Fintech, and Edtech, and has launched successful products worldwide on over a dozen platforms, including The Spoils, Virgin Poker, Tropicana Casino, and Lingvist's iOS & Android apps.
Formerly a Professor of Game Design at DigiPen Institute of Technology, he is an avid student of human motivation and the drivers of sustained engagement. His current passion is applied A.I. and machine learning. Since he joined in August 2015, Lingvist has grown from 80K to 800K users, who learn over a million words per day.