Conference Agenda

Session Overview
Session: 01-07: TaeWoo Kim
Time: Friday, 19/Jul/2019, 10:30am - 10:55am
Location: Seminar Room 3-1
Chair: Bo Edvardsson


Abstract

Cheating on AI

Authors: TaeWoo Kim (University of Technology Sydney, Australia), Hye Jin Lee (Indiana University), Yoo Sun Kim (N/A), Adam Duhachek (Indiana University)

Building on the burgeoning literature on consumer dishonesty, the current research examines whether consumers’ dishonest behaviors are amplified when they interact with non-human artificial agents (e.g., AI or robots). Prior research has shown that unethical behavior is suppressed by personal or situational inhibitory factors such as feelings of guilt (e.g., Ekman, 1985); that is, consumers forgo incentivized cheating opportunities when the action carries greater anticipatory guilt. Based on prior work showing that artificial (vs. human) agents are considered to lack the ability to feel (Waytz and Norton 2014), we hypothesize that consumers perceive cheating on non-human agents as less harmful and feel less guilty about it, leading to greater engagement in unethical behavior. This hypothesis was supported by three experiments.

In Study 1, participants (N = 138 undergraduates) were told that they would be entered into a lottery to win $100 if the number they chose matched the number that an agent would randomly draw. Participants cheated more when Alexa, an AI (vs. a human confederate), was introduced as the officiating agent, showing that people behave more dishonestly when interacting with an AI (vs. a human).

Extending this finding to a marketing context, participants in Study 2 (N = 83 undergraduates) were given a hypothetical scenario in which they were selling a used car to a car dealer on the internet. The car’s true mileage was 80,000 miles, but its odometer incorrectly read 60,000 miles; participants were asked what mileage they would report to a potential buyer. Participants reported lower mileage when they believed they were reporting to an AI (vs. human) car dealer.

In Study 3, participants (N = 234 undergraduates) were instructed to imagine that they had decided to return a product they bought online because they had changed their mind, and that they would have to pay the return shipping cost. They were incentivized to choose a false reason (e.g., the size doesn’t fit) to obtain a free return. As predicted, a higher proportion of individuals chose a false reason for the economic incentive when they interacted with an AI (vs. human) service representative during the return process. This effect was mediated by lower anticipated guilt.

The implications of the current research extend to various other contexts. In the future, AI will be involved in many decisions, such as legal judgments, tax calculations, and loan application decisions. Because individuals act more dishonestly when interacting with non-human agents, the current research suggests that businesses should deploy AI cautiously, as it can increase dysfunctional consumer behaviors and, in turn, costs to the institution.
