In the race to make chatbots more user-friendly and human-like, have their creators gone too far? Can a chatbot be too good? Too human-like?
It turns out, yes.
In a 2021 study conducted by researchers at the Oxford Future of Marketing Initiative (FOMI) and published in the Journal of Marketing as “Blame the Bot: Anthropomorphism and Anger in Customer-Chatbot Interactions,” the authors found that chatbots that were too human-like negatively impacted customer interactions, satisfaction, and brand reputation when the customer entered the chat angry or frustrated.
The theory is that when an upset customer encounters an anthropomorphised chatbot, the human-like presentation creates an expectation that the bot can empathize with and solve their problem just as well as a human could. When the bot is, in fact, unable to solve the problem, the customer becomes even angrier, leaves the chat dissatisfied and disgruntled, and is more likely to leave negative reviews, damaging brand reputation.
As the study states:
“Chatbots are increasingly replacing human customer-service agents on companies’ websites, social media pages, and messaging services. Designed to mimic humans, these bots often have human names (e.g., Amazon’s Alexa), humanlike appearances (e.g., avatars), and the capability to converse like humans. The assumption is that having human qualities makes chatbots more effective in customer service roles.”
Additionally, Professor Andrew Stephen, Director of FOMI, says: “The emotional state of consumers when they shop is a factor that cannot be underestimated. People often shop to lift their mood after a bad day. If they encounter any issue in their customer journey, they expect the friendly customer service advisor on the other side to have the problem-solving skills to help them satisfy their needs. Chatbots, as much humanised as they can be, haven’t got that level of human sophistication, yet.”
The research showed that chatbots only had this negative effect on the customer experience when the customer was already upset. When the customer entered the chat in a neutral mood, there were no measurable negative outcomes.
While FOMI offers recommendations to mitigate this issue, such as assessing the customer’s mood when they enter a chat with a bot and setting realistic expectations (for example, the bot offering a friendly reminder that it is “just a bot”), another important tool is to have actual human customer service representatives available to take over when the bot can’t solve the issue. (In industry terms, this is referred to as escalation.) A seamless customer experience can look like this (a rough code sketch of the routing logic follows the list):
- The chatbot is the first point of contact and assesses how frustrated the customer is. If frustration or anger is present, it escalates the customer to an outsourced, 24/7 customer service rep.
- The outsourced rep provides the human element needed to de-escalate the customer’s anger or frustration, and either solves the issue or determines that in-house customer service needs to get involved.
- In-house customer service is brought in only for issues that truly cannot be solved by the chatbot or the outsourced rep. When that happens, the customer is guided seamlessly through the hand-off, the issue gets resolved, and retention and satisfaction stay intact.
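If you want to picture how that triage might work in practice, here is a minimal Python sketch. Everything in it is hypothetical: the keyword scoring stands in for whatever sentiment or emotion-detection tool your chat platform actually provides, and the tier names and threshold are made up for illustration.

```python
from enum import Enum


class Tier(Enum):
    """The three tiers described in the list above."""
    CHATBOT = "chatbot"
    OUTSOURCED_REP = "outsourced 24/7 rep"
    IN_HOUSE = "in-house customer service"


# Toy stand-in for a real sentiment/emotion model.
ANGRY_MARKERS = {"angry", "furious", "ridiculous", "unacceptable", "worst", "refund"}


def estimate_frustration(message: str) -> float:
    """Return a rough 0.0-1.0 frustration score based on keyword hits."""
    words = [w.strip("!?.,").lower() for w in message.split()]
    hits = sum(1 for w in words if w in ANGRY_MARKERS)
    return min(1.0, hits / 3)


def initial_tier(message: str, threshold: float = 0.3) -> Tier:
    """First point of contact: calm customers stay with the bot,
    visibly angry ones go straight to a human rep."""
    if estimate_frustration(message) >= threshold:
        return Tier.OUTSOURCED_REP
    return Tier.CHATBOT


def escalate(current: Tier) -> Tier:
    """Move one step up the ladder when the current tier can't resolve the issue."""
    if current is Tier.CHATBOT:
        return Tier.OUTSOURCED_REP
    return Tier.IN_HOUSE


if __name__ == "__main__":
    opening = "This is ridiculous, I was charged twice and I want a refund!"
    tier = initial_tier(opening)
    print(f"Frustration {estimate_frustration(opening):.2f} -> start with {tier.value}")

    # Simulate the issue going unresolved at each tier until it reaches in-house staff.
    while tier is not Tier.IN_HOUSE:
        tier = escalate(tier)
        print(f"Unresolved -> escalate to {tier.value}")
```

The point of the sketch is the ordering, not the scoring: calm customers stay with the bot, visibly upset ones go straight to a human, and in-house staff only see the conversations nobody below them could close.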
The moral of the story is that human connection is still important to a satisfying customer experience, and likely always will be. Employ cutting-edge tools, be efficient and economical, but remember that your customers are human beings and want to be treated accordingly!
Research article:
Chatbots Get Blame if They Look Too Human
Jennifer Williams is a Marketing Behaviorist at Verilliance.com, building lean marketing strategies based on consumer and decision science.