Are Customers Lying to Your Chatbot?

Article Author: Alain Cohn, assistant professor at the University of Michigan School of Information.


Imagine you just placed an online order from Amazon.com. What’s to stop you from claiming that the delivery never arrived, and asking for a refund — even if it actually arrived as promised? Or say you just bought a new phone and immediately dropped it, cracking the screen. You submit a replacement request, and the automated system asks if the product arrived broken, or if the damage is your fault. What do you say?


Dishonesty is far from a new phenomenon. But as chatbots, online forms, and other digital interfaces become increasingly common across customer service applications, bending the truth to save a buck has become easier than ever. How can companies encourage their customers to be honest while still reaping the benefits of automated tools?


To explore this question, my co-authors and I conducted two simple experiments that allowed us to measure honest behavior in an unobtrusive way. In the first, a researcher asked participants to flip a coin 10 times and told them they'd receive a cash prize depending on the results. Some participants reported their coin flip results to the researcher via video call or chat, while others reported their outcomes via an online form or a voice assistant bot. Because participants flipped the coins in private, there was no way to know whether any individual lied, but we could estimate the cheating rate for the group as a whole, since honest reports should average only about 50% successful flips.


What did we find? On average, when participants reported to a human, they reported 54.5% successful coin flips, corresponding to an estimated cheating rate of 9%. In contrast, when they reported to a machine, they cheated 22% of the time. In other words, a bit of cheating is to be expected regardless — but our participants were more than twice as likely to cheat when talking to a digital system than when talking to a human. We also found that blatant cheating, which we defined as reporting an implausibly high success rate of nine or 10 successful coin flips, was more than three times more common when reporting to a machine than when reporting to a human.
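
One way to see roughly where such estimates can come from (a simplified sketch; the study's exact estimation procedure may differ): assume honest reporting yields successes on 50% of flips on average, and that cheating consists of reporting a failed flip as a success. The group-level cheating rate can then be backed out from the reported success rate.

```python
def implied_cheating_rate(reported_success_rate: float, honest_rate: float = 0.5) -> float:
    """Back out the share of failed flips misreported as successes.

    Assumed model (not necessarily the paper's exact method): observed rate =
    honest_rate + (1 - honest_rate) * c, where c is the cheating rate.
    """
    return (reported_success_rate - honest_rate) / (1 - honest_rate)

# Human condition: 54.5% reported successes -> roughly a 9% estimated cheating rate
print(round(implied_cheating_rate(0.545), 2))  # 0.09

# Machine condition: a 22% cheating rate corresponds to ~61% reported successes
print(round(0.5 + 0.5 * 0.22, 2))  # 0.61
```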


Next, a follow-up survey showed that the main psychological mechanism driving this effect was participants' concern for their personal reputations. We asked a series of questions designed to measure how much participants cared about the way the researcher viewed them, and we found that those who had reported their coin flips to a machine felt much less close to the researcher, and as a result were much less concerned about their reputations, than those who reported to the researcher directly. We therefore hypothesized that anthropomorphizing the digital reporting system (in our case, by giving it a human voice rather than a text-only interface) might make it feel more human, and thus make participants more worried about maintaining their reputations and less likely to lie. However, participants cheated just as much, suggesting that when people know they are interacting with a machine, giving that machine human features is unlikely to make much of a difference.


To be sure, it’s possible that advances in convincingly humanlike AI systems could make this a more effective strategy in the future. But for now, it’s clear that digital tools make cheating a lot more prevalent, and there’s no obvious quick fix.

The good news is that our second experiment identified a strategy that can help companies address this issue: while dishonesty can't be eliminated, it is possible to predict who is more likely to lie to a robot, and then nudge those users toward a human communication channel instead.


In this experiment, we first assessed participants' general tendency to cheat by asking them to flip a coin 10 times and report the results via an online form, and then categorized them accordingly as "likely cheaters" or "likely truth-tellers." In the next part of the experiment, we offered them the choice of reporting their coin flips either to a human or via an online form. Overall, roughly half of the participants preferred a human and half preferred the online form, but when we took a closer look, we found that "likely cheaters" were significantly more likely to choose the online form, while "likely truth-tellers" preferred to report to a human. This suggests that people who are more likely to cheat try to avoid situations in which they have to lie to a person rather than to a machine, presumably because of a conscious or subconscious sense that lying to a human would be more psychologically unpleasant.


Thus, if dishonest people tend to self-select into digital communication channels, this could offer an avenue to better detect and reduce fraud. Specifically, collecting data on whether customers are opting to use virtual rather than human communication channels could complement companies’ existing efforts to identify users who are more likely to cheat, enabling these organizations to focus their fraud detection resources more effectively. Of course, customers may figure out what companies are doing and try to game the system by choosing to speak with a real agent, thus avoiding being flagged as higher-risk — but this is really a win-win, since according to our research, they’ll be much more likely to behave honestly if they talk to a human.
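
To make the idea concrete, here is a minimal, purely hypothetical sketch (the field names, weights, and threshold are illustrative assumptions, not part of the study) of how channel choice could be logged as one weak signal alongside existing fraud indicators:

```python
from dataclasses import dataclass


@dataclass
class RefundClaim:
    claim_id: str
    chose_automated_channel: bool  # customer picked the chatbot/online form over a human agent
    prior_claims_last_year: int
    claim_amount: float


def risk_score(claim: RefundClaim) -> float:
    """Toy heuristic: channel choice is just one input among several.

    The weights are invented for illustration; a real system would be
    calibrated on historical outcomes and use many more features.
    """
    score = 0.0
    if claim.chose_automated_channel:
        score += 1.0  # self-selection into the digital channel
    score += 0.5 * min(claim.prior_claims_last_year, 4)
    score += 0.001 * claim.claim_amount
    return score


def route(claim: RefundClaim, threshold: float = 2.0) -> str:
    # Nudge higher-risk claims toward a human agent instead of auto-processing them.
    return "escalate_to_human_agent" if risk_score(claim) >= threshold else "auto_process"


print(route(RefundClaim("c-123", chose_automated_channel=True, prior_claims_last_year=2, claim_amount=600.0)))
# -> escalate_to_human_agent
```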


Ultimately, there’s no cure for digital dishonesty. After all, lying to a robot just doesn’t feel as bad as lying to a real human’s face. People are wired to protect their reputations, and machines fundamentally don’t pose the same reputational threat as humans do. But with a better understanding of the psychology that makes people more or less likely to lie, organizations can build systems that can identify likely cases of cheating and, ideally, nudge people to be more honest.
