Moral Decisions and the Externality of AI Usage

Project Start: 06/2020
Status: Completed
Researchers: Victor Klockmann, Marie Claire Villeval, Alicia von Schenk
Area: Household Finance, Experiment Center
Funded by: SAFE

Topics & Objectives

In more and more situations, artificially intelligent (AI) algorithms make decisions on humans' behalf and therefore have to model their (social) preferences. They can learn these preferences through repeated observation of human behavior in social encounters. Consider, for instance, an AI that gives financial advice or makes investment decisions based on consumer behavior and observed preferences. One's revealed willingness to accept risks or to make unethical investments might, to the extent that the AI learns from one's behavior, change its decision making for subsequent investments.

The goal of our project was to find out whether individuals adjust their selfish or prosocial behavior when it is common knowledge that their actions produce various externalities through the training of an algorithm.

In an online experiment, we let participants' choices in dictator games train an algorithm, thereby creating an externality on the future decision making of an intelligent system that affects future participants. We estimated revealed social preferences when dictators knew that their decisions would generate training data for an artificially intelligent algorithm. We let the algorithm make allocation decisions with monetary consequences in the present and the future. In our treatments, we manipulated the presence of an externality of the AI training data, the participants' concern for the consequences of these training data on future participants affected by the AI, and uncertainty about their future status.
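To make the setup concrete, the following is a minimal, purely illustrative sketch in Python; it is not the implementation used in the experiment. It assumes a simple majority-rule "model" and hypothetical variable names to show how observed dictator-game choices could serve as training data for an algorithm that later makes allocation decisions affecting a future participant.

from collections import Counter
from typing import List, Tuple

# One observed dictator decision: the allocation the dictator chose and the one
# they rejected, each given as (own_payoff, other_payoff).
Choice = Tuple[Tuple[int, int], Tuple[int, int]]

def train(choices: List[Choice]) -> str:
    """Label each observed decision as 'selfish' or 'prosocial' and return
    the majority rule revealed by the dictator's behavior."""
    labels = [
        "selfish" if chosen[0] > rejected[0] else "prosocial"
        for chosen, rejected in choices
    ]
    return Counter(labels).most_common(1)[0][0]

def decide(rule: str, option_a: Tuple[int, int], option_b: Tuple[int, int]) -> Tuple[int, int]:
    """The trained 'algorithm' imitates the learned rule when it allocates
    payoffs in a later decision that affects a future participant."""
    if rule == "selfish":
        return max(option_a, option_b, key=lambda o: o[0])  # maximize own payoff
    return max(option_a, option_b, key=lambda o: o[1])      # maximize the other's payoff

# A mostly selfish training history leads the algorithm to a selfish future allocation.
history = [((10, 2), (6, 6)), ((8, 1), (5, 5)), ((6, 6), (9, 0))]
rule = train(history)
print(rule, decide(rule, (10, 0), (5, 5)))  # -> selfish (10, 0)

In this stylized version, the externality is immediate to see: whether the algorithm later chooses the selfish or the prosocial allocation for others depends entirely on the behavior it was trained on.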

Key Findings

  • Being informed of the externality of training data for an artificially intelligent algorithm did not affect the selfishness of decisions when the future status was certain.
  • When future status became uncertain and dictators could be harmed by the externality of their training data, intergenerational responsibility arose and the selfishness of decisions decreased. Changes in monetary incentives alone could not explain the change in revealed social preferences.
  • When the future status was uncertain, introducing an externality of AI training on the future significantly reduced the frequency of selfish choices, especially when efficiency could be improved by an altruistic choice.
  • Making individuals aware of the consequences of algorithmic training on future generations could induce prosocial behavior. However, this is only the case when individuals risk being harmed themselves by future algorithmic choices.

Policy Implications

One possible interpretation is that greater uncertainty about one's own future situation leads individuals to distance themselves from their immediate selfish interests and to consider the situation more broadly from the outset, in the spirit of John Rawls' idea of making decisions behind a veil of ignorance.

Related Published Papers

Victor Klockmann, Marie Claire Villeval, Alicia von Schenk (2023): "Artificial Intelligence, Ethics, and Intergenerational Responsibility", Journal of Economic Behavior & Organization. Area: Household Finance, Experiment Center.

Related Working Papers

No. 335: Victor Klockmann, Marie Claire Villeval, Alicia von Schenk (2022): "Artificial Intelligence, Ethics, and Intergenerational Responsibility". Area: Household Finance, Experiment Center.