01-07: TaeWoo Kim
Chair: Bo Edvardsson
Cheating on AI
Building on the burgeoning literature on consumer dishonesty, the current research examines whether consumers’ dishonest behaviors are amplified when they interact with non-human artificial agents (e.g., AI or robots). Prior research has shown that unethical behavior is suppressed by personal or situational inhibitory factors such as feelings of guilt (e.g., Ekman, 1985). That is, consumers forgo opportunities to cheat for incentives when the action carries greater anticipatory guilt. Based on prior work showing that artificial (vs. human) agents are considered to lack the ability to feel (Waytz and Norton 2004), we hypothesize that consumers perceive cheating on non-human agents as less harmful and feel less guilty about it, leading to greater engagement in unethical behavior. The hypothesis was supported by three experiments.
In Study 1, participants (N = 138 undergraduates) were told that they would be entered into a lottery to win $100 if the number they chose matched the number that an agent would randomly choose. The results show that individuals cheated more when an AI (Alexa) rather than a human confederate was introduced as the officiating agent. This study shows that people tend to behave more dishonestly when they are interacting with an AI (vs. a human).
Extending this finding to a marketing context, participants in Study 2 (N = 83 undergraduates) were given a hypothetical scenario in which they were selling a used car to a car dealer on the internet. For a car with a true mileage of 80,000 miles but an incorrect odometer reading of 60,000 miles, they were asked to indicate the mileage they would report to a potential buyer. The results show that individuals reported lower mileage when they believed they were reporting to an AI (vs. human) car dealer.
In Study 3, participants (N = 234 undergraduates) were instructed to imagine that they had decided to return a product they bought online because they changed their mind, and that they would have to pay the shipping cost. They were incentivized to choose a false reason (e.g., size doesn’t fit) to obtain a free return. As predicted, a higher proportion of individuals chose a false reason for the economic incentive when they interacted with an AI (vs. human) service representative during the return process. This effect was mediated by lower anticipated guilt.
The implications of the current research extend to various other contexts. In the future, AI will be involved in many decisions, such as legal judgments, tax return calculations, and loan application decisions. Our findings show that individuals act more dishonestly when interacting with non-human agents. Thus, the current research suggests that business institutions should deploy AI cautiously, because it can increase dysfunctional consumer behaviors that raise costs for those institutions.