Amelia is a pleasant, bright professional. She is personable, learns quickly, and can speak 20 languages.
She dresses like an accountant, is patient and clear, and is cheap to hire. And she can learn anything - anything - to virtually expert level in less than a minute.
Amelia is an artificial intelligence. She exists, in commercial form, today. And she wants to take over about a quarter of all jobs within two decades.
Needless to say, when I met Amelia earlier this week I wished I'd worn a tie.
How is Amelia going to do this? By out-thinking us. At scale.
Amelia is a new type of artificial intelligence, one that IPsoft claims can absorb, deconstruct and use information like a human being.
You might think Google, Siri or Cortana are already pretty smart. But while most search engines, 'chatbots' and even IBM's TV quiz-beating Watson can appear to be thinking, they are really doing something far simpler: database lookup. They look for keywords, match up relevant documents and phrases, perhaps calculate something simple on top (a currency conversion, for instance) and spit back information.
Algorithmically the results are likely to be extremely relevant to your query, but Google does not, on a fundamental level, 'understand' what you, or it, said.
IPsoft claims that Amelia does. That's because Amelia does not just understand the syntax of what you ask it, but also the semantic meaning of what you said. It is able to break down a human sentence into parts, check those against a broader range of definitions and contextual use (a "neural ontology"), and then recombine the meaning of those parts in its own words, applying logic and thought in the process.
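The difference between keyword lookup and the deconstruct-and-recombine approach IPsoft describes can be sketched in a few lines. This is a deliberately toy illustration: the ontology, facts and function names below are invented for the example and bear no relation to IPsoft's actual data structures, which the company has not published.

```python
# Toy sketch of "deconstruct, map to concepts, recombine":
# synonyms resolve to the same concept, so the answer depends on
# meaning rather than on matching the literal words of the question.
# All names and data here are illustrative assumptions.

ONTOLOGY = {
    "restart": "action:reset",
    "reboot": "action:reset",   # a synonym resolves to the same concept
    "pump": "device:pump",
    "valve": "device:valve",
}

KNOWLEDGE = {
    ("action:reset", "device:pump"): "Hold the red button for five seconds.",
    ("action:reset", "device:valve"): "Close inlet A, then reopen it slowly.",
}

def answer(question: str) -> str:
    # 1. Deconstruct: reduce the sentence to the concepts it contains.
    concepts = tuple(sorted(
        ONTOLOGY[word]
        for word in question.lower().strip("?").split()
        if word in ONTOLOGY
    ))
    # 2. Recombine: answer from the meaning, not the literal keywords.
    return KNOWLEDGE.get(concepts, "I need to escalate this to a human.")
```

Because "reboot" and "restart" both map to the same concept, two differently worded questions yield the same answer, which a pure keyword matcher would not guarantee.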
The practical result, which IPsoft demonstrated to me from a blank version of Amelia in real time, is pretty astonishing - and it's close, in appearance and function, to a functioning intelligence.
Based on our demo, with Amelia you can upload the entire text of a complex technical manual - in this demo it was an instructional text which had something to do with oil rig maintenance - and Amelia will digest the entire thing in seconds. Then when you type in a question (in one of 20 languages) it will be able to send you a genuinely useful answer, without having access to any other network, manual or information store.
It can go from knowing nothing to being able to teach you to fix a faulty oil rig in 30 seconds.
Yes, it was a controlled demo. Yes, it had a couple of slight flaws, including one terrifyingly worded 'FATAL ERROR' which turned out to just be a Java issue (my panic was swift, though brief). But it worked. It's also commercially ready. IPsoft unveiled Amelia last week not as a concept, or a functioning demo, but as a real product ready for deployment around the world. Now.
"We want to make sure that human beings can dedicate their time to more valuable tasks. Taking out the more repetitive tasks is I think a noble aspiration for a company," said Frank Lansink, EU CEO of IPsoft, at a briefing in the firm's HQ at 30 St Mary Axe (the Gherkin).
"Our purpose is to elevate human beings into a more meaningful role, adding value to society, or to enterprise, or the customer."
What's more, Amelia can also learn alongside human partners. Take customer support chat bots. These already exist, but once they 'break' (or you go off-message) they require immediate replacement by a human supervisor, and don't learn from their mistake.
Amelia is different. In a scenario where a customer has a problem she can't fix, she'll listen in as a human helper takes over, then deconstruct how the problem was solved and do it herself the next time around. She'll never need to learn this again, and will gradually build up a knowledge base conceivably equal in extent and usefulness to that of any human employee.
"Technology shouldn't only be able to be leveraged by technologists," said Parit Patel, Senior Solutions at IPsoft.
"You shouldn't need to lean how to program to train Amelia or use Amelia. You can speak to her in the same way you can speak to a new starter in your office."
And yes, that's the rub. IPsoft is quite blunt: they want Amelia to save their customers money, which ultimately means replacing human jobs.
IPsoft sees Amelia supplementing, or directly replacing, virtually all 'non-expert, repetitive' job functions from customer support to expert assistance and back office roles, as it has done in trials at various Fortune 1000 companies.
"It's going to change the way people work in the future. You can't ignore that. I think it's important to have that dialogue," said Lansink.
"We see Amelia as a loyal servant, supporting human beings to be able to improve their daily lives and the way they work."
Which is just the start, of course. As this type of thinking tech gets better, more jobs and tasks can fall under its remit. And futurists agree that this transition to automated jobs is happening whether we want it to or not. According to academics at the Oxford Martin School, 47% of all employment is at risk from automation within a few decades. We are approaching an employment Armageddon; a ground-up remaking (or breaking) of the economy.
The supposition among many futurists and technologists is that this tech will create as many jobs as it kills - as has happened at various stages of humanity's economic evolution. But there are no guarantees - which IPsoft acknowledge.
Lansink said: "Technology has always been the ultimate liberator… it's not a guarantee for the future, but I think this will be another shift and transformation of people working in a different context."
IPsoft insist that their new tool will be a help to human workers. AI like Amelia will be like log tables or farming machines, they say - freeing humanity from repetitive tasks and allowing us to think up bigger, better and more lucrative ideas. But it's still something governments may have to address.
"I don't think that governments realise that this type of technology is here and that we might need to rethink the way we work. That is something that we need a dialogue on," says Lansink.
"Our biggest objective is that in all the research and development we've done is to build a cognitive engine in which you as a human being would not know if you were talking to a computer. It should be at the same comfort level as if you were talking to a human being."
What Amelia isn't is Skynet. ("In reality it's less Terminator and more Starship Enterprise," said Patel). And it's not simulating a human brain, either - it merely imitates the way that the human brain appears, to software developers, to learn.
"We've done this in the way that the Wright brothers approached flight," said Patel. "They didn't go out there and build a flapping aeroplane, they looked at the principles of the wing and they made an aeroplane. We're not trying to replicate the brain --
"-- the outcome is the same, but the way to get there is different" said Lansink.
"We're not going to put Amelia in front of a TV and say 'learn the world', we'll put her in a situation where she has a function to perform, and she can learn what to do."
There are unproven aspects to technologies like Amelia. Among them is the extent to which humans are willing to work alongside AI, and how robust AI proves to be in the real world. Physical robots - not IPsoft's area, but a related one - are also a continued disappointment in practice.
Other companies are interested in building their own learning AI too. A potential market of $5-7 trillion is quite a prize, and many other companies are pitching their own tech as the breakthrough link between human and digital cognition. Google alone paid £242 million for the British AI company DeepMind earlier this year, and everyone from IBM to Facebook has solutions in development to integrate AI into human work environments. IPsoft thinks it has a technological head start, but how it develops and is adopted in the market is another question.
And yes, there is a bigger question about the darker side of AI - not specifically relating to Amelia, but to the precedent it might help to set.
Philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, said in a recent interview that while current AIs pose no danger to humans, there is obvious potential for that to be a problem in the future.
"Consider a superintelligent agent that wanted to maximize the number of paperclips in existence, and that was powerful enough to get its way," he said. "It might then want to eliminate humans to prevent us from switching if off (since that would reduce the number of paperclips that are built). It might also want to use the atoms in our bodies to build more paperclips."
"One might then ask whether we should stop building AIs? That question seems to me somewhat idle, since there is no prospect of us actually doing so. There are strong incentives to make incremental advances along many different pathways that eventually may contribute to machine intelligence – software engineering, neuroscience, statistics, hardware design, machine learning, and robotics – and these fields involve large numbers of people from all over the world."
Watching Amelia work, and listening to IPsoft's confident pitch, it's hard not to think that we are on the verge of something new. Something which - while not necessarily good for everyone who likes their current job and not working for a robot overlord - will take humanity forward, in one way or another.
Don't be surprised if you have a robot coworker hanging around your desktop in a chatbox sometime soon. In one scenario that might make your life easier, less repetitive and less stressful. In another it might see one of Amelia's descendants sitting behind a giant steel desk handing you a pink slip.