The Robot Will See You Now

From Microsoft's AI chatbot, Tay, picking up the worst language we humans have to offer, to Tesla's self-driving cars claiming their first victim, it's fair to say that AI has captured the public's imagination, and not always for the right reasons.

But this isn't the opening scene of Terminator 2, it is real life. As we begin to interact with AI more regularly, the likely scenario is that we're left feeling somewhat dissatisfied, empty or even dehumanised: a conversation with a chatbot, for instance, that underwhelms, irritates or feels odd.

And yet chatbots are being held up as a transformative way of interacting with businesses and brands. For customers they promise a better, faster experience - less time spent waiting for an operator to become available or navigating through call centre menus; problems being resolved more quickly.

For brands the promise is greater operational efficiency on top of a better user experience - if bots can handle basic queries about billing, payments or deliveries, call centre operators can be freed up to handle the more complex queries, adding greater value to the business overall.

So how can brands build bots that leave people feeling more not less human as a result of the experience? How can brands build bots that live up to their promise?

These are the questions we've set out to answer through our recently published research collaboration with Goldsmiths, University of London and IBM Watson - Humanity in the Machine.

Firstly, brands need to focus on building trust. Through a series of biometric experiments measuring stress levels, we found that users are less forgiving of machines making mistakes than humans.

That means brands need to be conservative in their ambitions and, in their early iterations, make sure that their bot doesn't get much wrong, in order to build up trust. That may mean asking more questions than is technically required to provide an answer, in order to build confidence in the results, as some AI medical services do.

But intriguingly we found that consumers are often more trusting of bots around sensitive information than they are of human customer service operators.

25% say they are happier to give sensitive information to a chatbot. For 'embarrassing medical complaints', twice as many people prefer talking to a chatbot rather than a human as for 'standard medical complaints'.

People are prepared to trust chatbots, and so brands now need to make sure their development decisions build on this rather than undermine it.

Secondly, brands need to align the bot's tone of voice with their values without coming across as trying to be too 'chatty'.

Working with IBM Watson we set out to explore the tone of voice issue, by testing two alternative banking bots with very different personalities: one was chatty, informal and conversational; the other was more straightforward, with a serious and functional tone of voice.

Many found the chattier version unnecessarily off-putting, patronising or even weird. As one respondent put it: 'The chatty one is like my dad when he uses emoticons, it's creepy.'

Brands need to give the bot a tone of voice which expresses their personality in a way that is flexible, contextual and personalised to different users and different situations. This will mean using copywriters alongside programmers to create consistent style and tone.

Finally, brands need to avoid making the bot 'human' an end in itself. What makes an AI feel 'human' does not depend on how human it appears to be, or how life-like its interactions are. What defines a human experience is the experience itself - it is measured by how a person feels when dealing with the AI, not by some intrinsic humanity in the technology.

A 'human' experience is defined by how the user feels, not how 'life-like' the bot is.

Bots should aim to use context and emotional understanding to deliver a 'human' experience by meeting the user need. In doing this the style of the bot should ideally go unnoticed.

If it feels too 'robotic', then interacting with it leaves the user feeling dehumanised. If it's too 'life-like' then the user can be left feeling patronised or even disturbed.

The challenge is to get the balance right and leave the user feeling as though they have had a human experience. And, crucially, to avoid falling into the 'uncanny valley' through creating a bot that feels creepy by attempting to emulate humanity.

As advances in AI continue to gather steam, there is no doubt that it will play a larger role in our everyday lives in one form or another. However, there's a large risk for companies. Get it right and it is business as usual. But get it wrong, and customers will be quick to condemn the deployment of tech that doesn't meet their requirements.

When AI is used correctly, it's a mutually beneficial situation: customers enjoy a better service experience and brands can cut their operational costs. However, brands need to ensure they are not nurturing the next generation of racist, sex-hungry bots. Instead, efforts should be focused on creating an experience that is right for their brand and, more importantly, their customers.